RDNA4

Depends how much of a product stack they'd want. Maybe 8800XT, 8700XT, 8700 for N48, then 86XT to 85XT for N44? The perf/price/value gap may be far too great without it, and if they can get more money for cheap parts, why not, from their perspective?
Well as I said, the yields would probably be good enough that there would likely be a low percentage of dies that actually have a quarter cut due to defects. Look at N31, even the most cut down part had 16 out of 96 CUs cut, and it came a while later. And you don't want to intentionally cut good dies to sell a lower priced part. So the heavily salvaged part is likely to be limited volume.
Yes, you are always going to have some percentage of dies that need to be salvaged; if yields are that good, then you want a big cut so you can salvage the most dies into a single product.
They really need that 192-bit N48 to fall in between the two N32 SKUs, which seems likely if they are giving it 19Gbps GDDR6.

The main issue is how high they can push N44 without being bandwidth limited. A 20% bump over N33 seems doable, but how much farther?
If they can push N44 to ~3.7GHz, that could net them ~1.35x of N33 and land within ~10% of the 7700XT, but that might need 20Gbps GDDR6.
The other issue is whether they could do a clamshell mode with both 4Gib and 8Gib chips and give N44 12GB of VRAM.
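Rough math for the bandwidth and capacity questions above, as a sanity check. Everything here is an assumption for illustration: the bus widths and speeds are rumours, and I've used 8Gb and 16Gb chip densities as one way a 128-bit clamshell could reach 12GB (the mixed-density part in particular is speculative):

```python
# Back-of-envelope GDDR6 math; all configs below are rumours/assumptions.

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: one data pin per bus bit, 8 bits per byte."""
    return bus_bits * gbps_per_pin / 8

def clamshell_capacity_gb(bus_bits: int, chip_densities_gbit: list) -> float:
    """Clamshell mode puts two chips on each 32-bit channel, so a 128-bit
    bus carries 8 chips; capacity is the summed densities in Gb / 8."""
    assert len(chip_densities_gbit) == 2 * (bus_bits // 32)
    return sum(chip_densities_gbit) / 8

# 192-bit N48 cut with 19Gbps GDDR6:
print(bandwidth_gbs(192, 19))        # 456.0 GB/s
# 128-bit N44 with 20Gbps GDDR6:
print(bandwidth_gbs(128, 20))        # 320.0 GB/s
# 128-bit clamshell mixing four 16Gb and four 8Gb chips:
print(clamshell_capacity_gb(128, [16, 16, 16, 16, 8, 8, 8, 8]))  # 12.0 GB
```

For reference, 456GB/s would indeed sit between the 7700XT's 432GB/s and the 7800XT's 624GB/s, consistent with the point about falling in between the two N32 parts.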

I have been tweaking this lineup for quite a while, but this seems to make the most sense to me, though it may be a bit optimistic in places.
Prices are obviously up in the air; it just depends on the final design/performance and a bit on the branding/model name.

N48 16GB 64CU/128ROPs @ 3.4GHz 250W ~7900XT (+50% 7700XT, +30% 7800XT)
N48bin 16GB 56CU/128ROPs @ 3.2GHz 220W ~7900GRE (+30% 7700XT)
N48cut 12GB 48CU/96ROPs @ 3.0GHz 180W ~RTX4070 (+60% 7600XT, +10% 7700XT)
N44 12GB 32CU/64ROPs @ 3.7GHz 160W ~3070Ti (+35% 7600XT, +10% 4060Ti)
N44bin 12GB 32CU/64ROPs @ 3.2GHz 140W ~6700XT (+15% 7600XT)
N44cut 8GB 28CU/64ROPs @ 2.8GHz 100W ~7600XT/RTX4060 (OEM?)

Edit- Removed my naming and pricing estimates since they were likely way off.
See above on my point on the cut down die.

3.7 GHz for N44 seems difficult. Rumours indicated 3.3-3.4 GHz at most, and for a lower-end part you'd want to keep the power draw under control. MLID indicated even lower for N48 in his recent video (2.9-3.2 GHz), and he's been fairly accurate with AMD leaks as of late. And I think the VRAM is likely to be 8 GB, which is not ideal. N48 is likely to stick to 96 ROPs just like N32, I think?

What I hope for this time is AMD to push for more notebook market share instead of letting Nvidia take practically the whole market. N48 and N44 are ideal mobile parts and I'd like to see a more reasonable 12/16 GB VRAM option available.
 
N32 was 3SE. N48 rumors seem to indicate 4SE.
So if N48 is 4SE and 64CU in the full chip, then a small cut would be 56CUs. Cutting it down to 3SE would get the 96ROPs, 48CUs, and 192bit.
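That per-SE arithmetic can be sketched as a quick check. The per-SE figures are derived from the rumoured 4SE/64CU/128ROP/256-bit full config, and treating the bus as scaling with SEs follows the post's assumption rather than anything confirmed:

```python
# Quick per-shader-engine scaling check for a hypothetical 4SE N48.
# Full-chip figures are from rumours; nothing here is confirmed.

FULL = {"se": 4, "cu": 64, "rops": 128, "bus_bits": 256}

def cut_to(se_enabled: int, full: dict = FULL) -> dict:
    """Scale each resource linearly with the number of enabled shader engines.
    (A real salvage SKU doesn't have to cut the memory bus along with an SE.)"""
    scale = se_enabled / full["se"]
    return {k: (se_enabled if k == "se" else int(v * scale))
            for k, v in full.items()}

print(cut_to(3))  # {'se': 3, 'cu': 48, 'rops': 96, 'bus_bits': 192}
```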

If N44 can't clock that high... then there is a huge gap between N44 at +15-20% over the 7600XT and the cut-down N48 with 48CUs, which would end up around 7800XT performance.
That might mean another cut-down N48 with 42CUs to fill the gap, or they'll have a bunch of 7700XTs to sell for the foreseeable future.

Edit- I guess AMD was fine with the ~45% performance gap between 7700XT and 7600XT, though that was filled with the 6700/6750 RDNA2 parts.
 
But given it's a much smaller GPU, yields should be fairly good. Would such a heavily cut-down part make sense?

They could be trying to really push clockspeeds. This would conveniently split yields into more bins: even with no defects, fewer dies would reach the power/clockspeed targets for the highest bin. I'd love to speculate on what the actual clockspeeds are, but other than that "3.4GHz" leak, IDK. And I'd expect the smaller 32CU die to hit that, maaaybe, not the bigger one.

As for notebook SKUs I'd expect that if they were launching desktop versions in the next month or two they'd save notebook announcements for CES.
 
Right, but what I mean is: what are the perf jumps and price points? Say N44 is roughly a $250 + $300 part for the 8600/XT; with reasonable perf scaling and linear value, that'd be a jump to $500 and $600 parts. Maybe they are really aggressive with prices: $200 + $250, then $400 + $500? Maybe with this added they can have a $379-$429-$499 price scheme, sell the worst dies (supply doesn't have to be great), and slightly bump up the price of the cut-down die, making more money than they otherwise would with a price ladder that makes sense. That way you've artificially increased the price of your cut-down N48.
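The "linear value" jump in that pricing argument comes down to one line of arithmetic (every figure below is hypothetical, just mirroring the post's examples):

```python
# Linear perf/price ("constant $/perf") sketch; every figure is hypothetical.

def linear_price(base_price: float, base_perf: float, target_perf: float) -> float:
    """Price a faster part so that dollars-per-performance stays constant."""
    return base_price * target_perf / base_perf

# If a $300 N44-class part is the 1.0x baseline, a part with twice its
# performance at the same $/perf lands at $600:
print(linear_price(300, 1.0, 2.0))  # 600.0
# An aggressive $250 baseline would put the 2x part at $500:
print(linear_price(250, 1.0, 2.0))  # 500.0
```

The point being made is that slotting a salvage SKU between those anchors lets the better dies be priced above the "linear" line.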
 

Perhaps, but then they'd also end up burning a lot more power, which isn't the best option either. I would still be a bit conservative, as the process is also practically the same.

No doubt, they usually save it for CES, but that's not the issue. AMD simply has not been able to convince enough OEMs to use their GPUs and/or hasn't been able to market as effectively despite having competitive GPUs. The 6xxx series was a real chance for them to increase market share, and I think they did a bit, but the 7xxx series ended up being worse. AMD needs to do better here.

That sounds about right for N44, especially given the likely 8 GB VRAM, but I don't see them being so aggressive with N48 pricing if they can match or exceed 7900XT/4070 Ti Super performance in raster/RT. I could see something like $599/$499/$449, in part to make the heavily cut-down part less attractive and lower volume (similar to the RX 6800 XT and RX 6800, which had MSRPs of $649 and $579 respectively, despite a larger difference in performance).
 
I've just discovered that a strix is an animal
So...
Strix Halo: 16 cores, 4 memory channels?, 20CU, 256 bus, 32 IC?, 8533MHz DDR
American holiday location: 4 cores, 2 memory channels, 64???, 6400MHz DDR
 
I believe Strix Halo is 16 Z5 cores, 4MB L3 per core(?), 20WGPs (40CUs), 256bit bus, 32MB MALL (IC), 8533 LPDDR5X
Sonoma Valley is 4 Z5c cores, 2WGPs (4CUs), 64bit bus, 6400 LPDDR5

We already have leaked screenshots and chip info showing Strix Halo as a 16c/32t part.
Some are suggesting the "4" is 4 Z5c cores for 20 cores total... but that doesn't match the info we already have.

Edit- Changed Strix Halo's "4" from 4SE to 4MB L3 per core. That seems to make more sense.

2nd Edit- I guess there is some information that there are 4 Zen5c cores on the IOD... that would better explain the Strix Halo "4".
I'm just confused at why those 4 cores wouldn't show up in the screenshots of benchmarks and chip info utilities.
 

Gonna go ahead and guess desktop launch is this year with this sudden burst of leaks from all over

Most of these "leaks" have the same info that was known months ago so it's not sudden as such. Sometimes it's just regurgitation of older info to get more clicks. That said I would also expect a launch in Q4 this year for at least N48.

Apparently they're not Zen5C but Zen5LP, which is further cut down from Zen5C. Maybe the reason they don't show up is that they are strictly low-power cores and cannot be used alongside the main cores?
 

Having such a low-power utility core seems like a good idea, at least if they can get proper OS support for putting background tasks there, but I don't really understand why you'd have 4 of them? Seems like a waste of silicon to me.
 
Intel had 2 LP E-cores in Meteor Lake; in Lunar Lake there are 4 ("LP" comes from the fact they sit on a low-power island).
 

This was one of the unfortunate consequences of Apple using their own SOCs for the entire Mac lineup.

If not for that, the 14-inch and 16-inch MacBook Pros, 24-inch iMacs, and Mac Studios would've been sporting 7600 XT and 7700 XTs. The MacBook Pros in particular are huge volume products.
 

That's a separate market altogether. In Windows laptops with a dGPU, AMD's market share is sub-10%, despite having somewhat competitive products and cheaper pricing.

Even in Windows laptops without dGPUs, AMD's market share is just about 20-25% despite having vastly superior APUs since at least Rembrandt if not Cezanne. They should have had at least 50% market share in APUs by now. But they've not been able to capitalize and have let Intel catch up with Lunar Lake. One could also argue that it took them a long time to gain significant server market share as well despite having significantly superior platforms and performance since Rome in 2019.

Take this with a grain of salt, but this is supposedly from IFA:

AMD still sucks as an OEM partner: "In discussions with the many manufacturers at IFA 2024, it also became clear that AMD is still struggling with many problems that the company has had for years. This is one of the reasons why new notebooks with AMD chips are almost never presented at the trade fair. AMD is still unable to deliver enough chips and, above all, quickly - a persistent problem that has been known for a decade now. Many large OEMs have therefore not pushed ahead with expanding their portfolios towards AMD, as there was no prospect of receiving large numbers of chips from AMD quickly. Manufacturers are secretly saying that AMD and its many partners have "left billions of US dollars on the table" over the years, one manufacturer told ComputerBase."

Source -
 
My understanding is that AMD couldn't (and probably still can't) match Intel in terms of OEM support. When designing a product, there will be many different problems that need proper support, and Intel is still much better than AMD in this area. This is definitely where AMD must improve greatly. With the rumor that Intel is considering "selling PC client business", this is getting more important; otherwise OEMs' lives will be much harder, and as a result end users will get worse products, and late.
 

I mean, Intel isn't selling that (they're selling Altera, an FPGA business). But yeah, AMD really needs to scale faster into laptops; they've had the engineering for years now, but I can't buy an AI 370 laptop with a really good keyboard and trackpad to save my life.
 
https://videocardz.com/newz/amd-udn...-gpu-architectures-successor-to-rdna-and-cdna

AMD announces unified UDNA GPU architecture — bringing RDNA and CDNA together to take on Nvidia's CUDA ecosystem

Jack Huynh [JH], AMD: So, part of a big change at AMD is today we have a CDNA architecture for our Instinct data center GPUs and RDNA for the consumer stuff. It’s forked. Going forward, we will call it UDNA. There'll be one unified architecture, both Instinct and client [consumer]. We'll unify it so that it will be so much easier for developers versus today, where they have to choose and value is not improving.
 
Basically AMD's way of saying they don't really care about gaming going forward.

While something like GCN was successful enough early on, these are very different days now. Trying to make a datacenter/AI-focused architecture still work for gaming is simply not going to work out well, especially for as long as Nvidia keeps making dedicated graphics/gaming architectures. And I'm sure AMD knows that and just doesn't care.
 
It seems like the right move, long-term.
If you want developers to focus on your ecosystem, you need to show them the benefit with a large user/install base.
If you want people to buy your products to use the "cool new software" then you need the developers to create it.

I see it as an extension of HSA but internalized to create a base level of feature parity across all their products.
The base building blocks will be similar but there will still be differences and optimizations made to target different markets.
The backwards compatibility seems like it would be a huge benefit to have a decent stepping-off point when making a new revision/architecture.

Edit- It seems like a KISS move for everybody.
 
The optimist in me says combined R&D, software efforts, userbase, etc. should make this a good thing long term; the cynic in me says AMD has seen its gaming efforts fall off a cliff and wants to consolidate Radeon and give it HPC/AI scraps. Hopefully it's a good thing long term. Wave32 or 64?

Tom's Hardware [TH], Paul Alcorn: So, with UDNA bringing those architectures back together, will all of that still be backward compatible with the RDNA and the CDNA split?

Jack Huynh [JH], AMD: So, one of the things we want to do is ...we made some mistakes with the RDNA side; each time we change the memory hierarchy, the subsystem, it has to reset the matrix on the optimizations. I don't want to do that.

So, going forward, we’re thinking about not just RDNA 5, RDNA 6, RDNA 7, but UDNA 6 and UDNA 7. We plan the next three generations because once we get the optimizations, I don't want to have to change the memory hierarchy, and then we lose a lot of optimizations. So, we're kind of forcing that issue about full forward and backward compatibility. We do that on Xbox today; it’s very doable but requires advanced planning. It’s a lot more work to do, but that’s the direction we’re going.

UDNA 6 then? I'm reasonably sure there have been rumours about AMD doing this for RDNA 5 too, but it's been a while since I've bothered checking.
 