AMD: RDNA 3 Speculation, Rumours and Discussion

Status
Not open for further replies.

xpea

Regular
Supporter
We already know nVidia is working on chiplets. So his entire idea that nvidia has nowhere to go makes no sense...
Exactly. Nvidia has been working on it since 2015, with three generations of test dies (physical, not just simulated) and multiple white papers and patents. In fact, Hopper has a chiplet "super GPU" version that was never revealed (yet?)
 

yuri

Regular
We already know nVidia is working on chiplets. So his entire idea that nvidia has nowhere to go makes no sense...
The same hype was made around HBM - Fury X gave AMD such a tremendous head start, leading to the Vega failures and HBM being dropped from consumer products completely...
 

Bondrewd

Veteran
The same hype was made around HBM - Fury X gave AMD such a tremendous head start, leading to the Vega failures and HBM being dropped from consumer products completely...
Why are you trying to bring up Vega here?
It has nothing to do with RDNA3 or 4.
Nvidia has been working on it since 2015
Intel has been working on EMIB for effectively 15 years and the mass-market products are still not here.
The next NV client parts are also single dies, so I don't know what your point is.
 
Last edited by a moderator:

eastmen

Legend
Supporter
Exactly. Nvidia has been working on it since 2015, with three generations of test dies (physical, not just simulated) and multiple white papers and patents. In fact, Hopper has a chiplet "super GPU" version that was never revealed (yet?)

Test dies, yes, but didn't AMD start shipping chiplets with Rome? That was, what, 2018? I wonder if Nvidia will ship chiplets in a flagship product as their first chiplet release, or whether they'd show up first in Tegra or something.
 

DegustatoR

Veteran
Test dies, yes, but didn't AMD start shipping chiplets with Rome? That was, what, 2018?
Intel shipped "mass market" "chiplet" products back in 2007.

Considering that AMD doesn't actually produce their dies or chiplet systems, it doesn't really matter much how long ago they (or NV) started doing something with "chiplets", IMO. The whole production side is handled by TSMC and is open to licensing by anyone.
 

Dampf

Regular

Gaming graphics revenue declined in the quarter based on soft consumer demand and our focus on reducing downstream GPU inventory. We will launch our next-generation RDNA 3 GPUs later this week that combine our most advanced gaming graphics architecture with 5-nanometer chiplet designs.

Our high end RDNA 3 GPUs will deliver strong increases in performance and performance per watt compared to our current products and include new features supporting high resolution, high frame rate gaming. We look forward to sharing more details later this week.

Sounds veerry interesting. New features supporting high-res, high-fps gaming? Could it be... are they already working on their own version of frame generation?! Please be true...!
 

JoeJ

Veteran
That a while back was in the age of xtors getting cheaper every other year.
Not so much anymore.
Yep, another reason. Currently we have 16 GB of RAM for the CPU and another 16 GB for the GPU, with one of those pools mostly empty depending on the workload. With unified RAM, 16 GB for both should do. That's a lot of xtors to spare, besides the power and PCB savings.
They just need to find a proper compromise between low-bandwidth/low-latency CPU RAM and high/high GPU RAM. If that's still a problem at all - consoles show the way.
Or put the RAM on the chip too, like Apple does. Even better.

Is it really just me who thinks this modular/replaceable-parts PC platform is totally outdated? Or am I in the wrong forum, where enthusiast habits traditionally dominate, so people keep ignoring the proper economic conclusions out of desire?
I think the PC industry has to react quickly, or other platforms will push it out of the consumer space within a decade.
An APU desktop can be powerful enough for all consumer interests, but sadly I cannot even find a 6800U laptop without a redundant dGPU. That's just silly.
 

Krteq

Newcomer
50% higher VGPR capacity confirmed by Mesa commits

[Image: Mesa commit showing the VGPR capacity increase]
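If the VGPR file per SIMD really does grow by 50%, the practical effect would be higher occupancy for register-heavy shaders. A minimal sketch with illustrative RDNA-style numbers (not confirmed RDNA3 specs):

```python
# Hypothetical occupancy sketch: a larger VGPR file lets a SIMD keep more
# waves resident. The register-file sizes and wave cap below are illustrative
# RDNA-style figures, not confirmed RDNA3 specs.

MAX_WAVES_PER_SIMD = 16  # assumed architectural cap on resident waves

def max_waves(vgprs_per_simd: int, vgprs_per_wave: int) -> int:
    """Waves resident on one SIMD, limited by the VGPR budget."""
    return min(MAX_WAVES_PER_SIMD, vgprs_per_simd // vgprs_per_wave)

# A shader allocating 128 VGPRs per wave:
old = max_waves(1024, 128)  # smaller register file -> 8 waves
new = max_waves(1536, 128)  # 50% larger file -> 12 waves
print(old, new)
```

With those assumptions, the same 128-VGPR shader goes from 8 to 12 resident waves, which is where a "50% more VGPRs" change would actually show up in practice.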


 

eastmen

Legend
Supporter
Actually yea, that's how it works in MI300.
But that's an APU.
Because the thing is a day out and the slides have been distributed already?
Meh, it could just mean they aren't ready to show it yet and so won't mention it until it's working as intended.
 
Is it really just me who thinks this modular/replaceable-parts PC platform is totally outdated? Or am I in the wrong forum, where enthusiast habits traditionally dominate, so people keep ignoring the proper economic conclusions out of desire?
I think the PC industry has to react quickly, or other platforms will push it out of the consumer space within a decade.
An APU desktop can be powerful enough for all consumer interests, but sadly I cannot even find a 6800U laptop without a redundant dGPU. That's just silly.

I've been wanting to see powerful APUs that can compete with midrange discrete GPUs for quite a while, and I thought we were on that track once HBM was introduced. I figured at the time we were probably ~5 years away from AMD at least producing an APU, or some multi-package config with 4+ GB of HBM acting as a cache, delivering truly competitive discrete-class performance. Obviously that didn't happen, for many reasons, cost being a big one. You're not seeing them discussed much on here because there just aren't any to talk about - at least none that are even competitive with something like a 1060, let alone midrange GPUs.

I do agree that the current need to basically duplicate your main memory plus all the other components for a large discrete GPU puts the PC at a distinct cost disadvantage to consoles atm, and that disparity may only increase as more bespoke solutions are adopted to deal with slowing process node advancement. I tend to focus on the mid/low-midrange as well, as I feel that's really where the bulk of PC gamers' purchases are, and having that price range seemingly treated as an afterthought is just not sustainable in the long term for the health of PC gaming as a whole.

However, as I wrote here, there are a lot of differences and challenges in the open PC market compared to the closed markets of consoles/Apple wrt bringing a UMA to market. The big problem remains memory bandwidth. Infinity Cache may provide a glimpse into how this problem can be mitigated for APUs, but there's a big difference between it providing an extra boost for a discrete card that already has 500+ GB/s to its own memory and doing so over ~80 GB/s - and that's with fast DDR5. Apple 'solves' it by building ridiculously large chips that only they can afford to make, and consoles solve it with GDDR6, which isn't affordable in the small quantities PC OEMs could order versus, say, Sony locking down supply agreements for years because they know they'll order 10+M chips a year.
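To put rough numbers on that gap: peak bandwidth is just effective transfer rate times bus width. The configurations below (DDR5-5000 in dual channel, 16 Gbps GDDR6 on a 256-bit bus) are illustrative assumptions, not the specs of any particular product:

```python
# Back-of-envelope peak bandwidth: transfer rate (GT/s) * bus width (bits) / 8.
# Example configurations are assumptions for illustration, not product specs.
def peak_bw_gbs(transfer_rate_gts: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return transfer_rate_gts * bus_width_bits / 8

ddr5 = peak_bw_gbs(5.0, 128)    # DDR5-5000, dual channel (2x64-bit)
gddr6 = peak_bw_gbs(16.0, 256)  # 16 Gbps GDDR6 on a 256-bit bus
print(ddr5, gddr6)  # 80.0 512.0
```

So an APU fed by dual-channel DDR5 is working with roughly 1/6th of the bandwidth a midrange discrete card gets, which is the hole a large cache would have to paper over.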
 
Which is why they should focus on bringing the latest GPU arch to APUs early, not late.
I'm very happy to see they finally plan to do so.
But do we know the expected GPU specs for Phoenix/Strix already?
And is there any sign the same mindset might arrive on desktops too? I mean, beyond just 2 CUs to do some video acceleration?

Supposedly there are 24 compute units in the high-end APU. If the Angstromic stuff turns out to be accurate tomorrow, then that sounds right too, as that leak specified a multiple of 12 CUs for the mid-range die rather than a multiple of 16.
 

eastmen

Legend
Supporter
I've been wanting to see powerful APUs that can compete with midrange discrete GPUs for quite a while, and I thought we were on that track once HBM was introduced. I figured at the time we were probably ~5 years away from AMD at least producing an APU, or some multi-package config with 4+ GB of HBM acting as a cache, delivering truly competitive discrete-class performance. Obviously that didn't happen, for many reasons, cost being a big one. You're not seeing them discussed much on here because there just aren't any to talk about - at least none that are even competitive with something like a 1060, let alone midrange GPUs.

I do agree that the current need to basically duplicate your main memory plus all the other components for a large discrete GPU puts the PC at a distinct cost disadvantage to consoles atm, and that disparity may only increase as more bespoke solutions are adopted to deal with slowing process node advancement. I tend to focus on the mid/low-midrange as well, as I feel that's really where the bulk of PC gamers' purchases are, and having that price range seemingly treated as an afterthought is just not sustainable in the long term for the health of PC gaming as a whole.

However, as I wrote here, there are a lot of differences and challenges in the open PC market compared to the closed markets of consoles/Apple wrt bringing a UMA to market. The big problem remains memory bandwidth. Infinity Cache may provide a glimpse into how this problem can be mitigated for APUs, but there's a big difference between it providing an extra boost for a discrete card that already has 500+ GB/s to its own memory and doing so over ~80 GB/s - and that's with fast DDR5. Apple 'solves' it by building ridiculously large chips that only they can afford to make, and consoles solve it with GDDR6, which isn't affordable in the small quantities PC OEMs could order versus, say, Sony locking down supply agreements for years because they know they'll order 10+M chips a year.

Couldn't we just go with more channels? Intel used to have a tri-channel RAM setup. What's preventing them from doing that again, or going with a quad-channel solution?
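More channels do scale bandwidth linearly - the catch is pin count, trace routing and board cost rather than the math. A quick sketch, assuming 64-bit channels and an illustrative DDR5-6000 speed:

```python
# Sketch of DDR bandwidth scaling with channel count. Each channel is treated
# as 64 bits wide (a DDR5 DIMM's two 32-bit subchannels counted as one);
# the 6000 MT/s speed is an illustrative assumption.
def ddr_bw_gbs(mt_per_s: int, channels: int, channel_bits: int = 64) -> float:
    """Peak theoretical bandwidth in GB/s across all channels."""
    return mt_per_s * channels * channel_bits / 8 / 1000

dual = ddr_bw_gbs(6000, 2)  # ~96 GB/s
quad = ddr_bw_gbs(6000, 4)  # ~192 GB/s, double the pins and traces
print(dual, quad)
```

Quad channel would double the dual-channel figure, but it still lands well short of a 256-bit GDDR6 card, and it pushes socket and motherboard costs toward HEDT territory - likely why it stayed in workstation platforms.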
 