> "Alright guys how's it going..."
We already know nVidia is working on chiplets. So his entire idea that nVidia has nowhere to go makes no sense...
> Currently, but people said similar things about FPU a while back.
That a while back was in the age of xtors getting cheaper every other year. Not so much anymore.
> Twice GPU power > Rembrandt?
Not quite twice, but it's an impressive piece of kit.
> We already know nVidia is working on chiplets. So his entire idea that nVidia has nowhere to go makes no sense...
Exactly. Nvidia has been working on it since 2015, with 3 generations of test dies (physical, not only simulated) and multiple white papers / patents. In fact, Hopper has a chiplet "super GPU" version that was never revealed (yet?).
> We already know nVidia is working on chiplets. So his entire idea that nVidia has nowhere to go makes no sense...
The same hype was made around HBM: Fury X gave AMD such a tremendous head start, leading to the Vega failures and HBM being dropped from consumer products completely...
> The same hype was made around HBM: Fury X gave AMD such a tremendous head start, leading to the Vega failures and HBM being dropped from consumer products completely...
Why are you trying to bring up Vega here?
> Nvidia has been working on it since 2015
Intel has been working on EMIB for effectively 15 years, and the mass-market products are still not here.
> Exactly. Nvidia has been working on it since 2015, with 3 generations of test dies (physical, not only simulated) and multiple white papers / patents.
Test dies, yes, but didn't AMD start shipping chiplets with Rome? That was, what, 2018?
> Test dies, yes, but didn't AMD start shipping chiplets with Rome? That was, what, 2018?
Intel shipped "mass market" "chiplet" products back in 2007.
> The whole production part is on TSMC and open to licensing by anyone
The actual packaging, yes, but what you do with it and how you build it is a whole other story.
> Gaming graphics revenue declined in the quarter based on soft consumer demand and our focus on reducing downstream GPU inventory. We will launch our next-generation RDNA 3 GPUs later this week that combine our most advanced gaming graphics architecture with 5-nanometer chiplet designs.
> Our high end RDNA 3 GPUs will deliver strong increases in performance and performance per watt compared to our current products and include new features supporting high resolution, high frame rate gaming. We look forward to sharing more details later this week.
> That a while back was in the age of xtors getting cheaper every other year. Not so much anymore.
Yep, another reason. Currently we have 16GB of RAM for the CPU and another 16GB for the GPU, with one of those pools mostly empty depending on workload. With unified RAM, 16GB for both should do. That's a lot of xtors to spare, besides power and PCB.
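To make the saving concrete, here's a minimal sketch of the two memory models, assuming a CUDA-capable system (the buffer size, the `scale` kernel, and the variable names are purely illustrative): in the discrete model the same buffer lives in system RAM and in VRAM and has to be kept in sync with copies; in the unified model a single managed allocation serves both processors.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

// Trivial kernel so the GPU touches the shared buffer.
__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Discrete model: the buffer exists twice (system RAM + VRAM)
    // and must be synchronized with explicit copies.
    float* h = (float*)malloc(bytes);          // 4 MB in system RAM
    memset(h, 0, bytes);
    float* d = nullptr;
    cudaMalloc(&d, bytes);                     // another 4 MB in VRAM
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

    // Unified model: one allocation, visible to CPU and GPU alike.
    // Total footprint is halved and the memcpy disappears.
    float* u = nullptr;
    cudaMallocManaged(&u, bytes);              // 4 MB, shared
    for (int i = 0; i < n; ++i) u[i] = 1.0f;   // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(u, n);     // GPU reads the same pages
    cudaDeviceSynchronize();
    printf("u[0] = %f\n", u[0]);               // CPU reads result, no copy back

    free(h); cudaFree(d); cudaFree(u);
    return 0;
}
```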
> Could it be... are they working on their own version of frame generation already?!
No.
> Consoles show the way.
Yeah, but those are all black boxes, which is the antithesis of the PC as a concept.
> but sadly I can't even find a 6800U laptop without a redundant dGPU
Blame China.
> They just need to find a proper compromise between low-BW / low-latency CPU RAM and high / high for the GPU. If that's still a problem at all. Consoles show the way.
Isn't that what Infinity Cache is for?
What makes you so certain? I'm sure AMD did experiment with motion interpolation as well.
> Isn't that what Infinity Cache is for?
Actually yeah, that's how it works in MI300.
> What makes you so certain?
Because the thing is a day out and the slides have already been distributed?
> Actually yeah, that's how it works in MI300.
Meh, could just mean they aren't ready to show it yet and so won't mention it until it's working as intended.
But that's an APU.
Is it really just me who thinks this modular / replaceable-parts PC platform is totally outdated? Or am I in the wrong forum, where enthusiast habits traditionally dominate, so people keep ignoring the proper economic conclusions out of desire?
I think the PC industry has to react quickly, or some other platforms will push it out of the consumer space within a decade.
An APU desktop can be powerful enough for all consumer interests, but sadly I can't even find a 6800U laptop without a redundant dGPU. That's just silly.
> However, as I wrote here, there are a lot of differences and challenges with the open PC market as compared to the closed market of consoles/Apple wrt marketing a UMA.
Yeah, if you're ever gonna see such an x86 part, it'll be confined to niche laptops.
Which is why they should focus on bringing the latest GPU arch to APUs early, not late.
I'm very happy to see they finally plan to do so.
But do we already know the expected GPU specs for Phoenix/Strix?
And is there any sign the same mindset might arrive at desktops too? I mean, beyond just 2 CUs to do some video acceleration?
I've been wanting to see powerful APUs that can compete with midrange discrete GPUs for quite a while, and I thought we were on that track once HBM was introduced. I figured at the time we were probably ~5 years away from at least AMD producing an APU, or some multi-package config, with 4+GB of HBM to act as a cache and deliver truly competitive discrete performance. Obviously that didn't happen, for many reasons, cost being a big one. You're not seeing them discussed that much on here as there just aren't any to talk about, at least ones that are even competitive with something like a 1060, let alone midrange GPUs.
I do agree that the current need to basically duplicate your main memory + all the other components for a large discrete GPU is putting the PC at a distinct cost disadvantage to consoles atm, and that disparity may just increase further as more bespoke solutions are turned to in order to deal with the slowing of process node advancement. I tend to focus on the mid/low-midrange as well, as I feel that is really where the bulk of the purchases are for PC gamers, and having that price range seemingly be an afterthought is just not sustainable in the long term for the health of PC gaming as a whole.
However, as I wrote here, there are a lot of differences and challenges with the open PC market as compared to the closed market of consoles/Apple wrt marketing a UMA. The big problem still remains: memory bandwidth. Infinity Cache may provide a glimpse into how this problem can be mitigated for APUs, but there's a big difference between it providing an extra boost for a discrete card that already has 500GB/s+ of raw bandwidth and it having to make up for main memory at ~80GB/s, and that's with fast DDR5. Apple 'solves' it by just creating ridiculously large chips that only they can afford to make, and consoles solve it by using GDDR6, which isn't affordable in the small quantities PC OEMs could order vs, say, Sony locking down supply agreements for years because it knows it will order 10+M chips a year.
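As a rough illustration of why a big on-die cache is the lever here, the standard back-of-envelope model treats effective bandwidth as a hit-rate-weighted mix of cache and DRAM bandwidth. The 80GB/s DRAM figure is from the post above; the 1000GB/s cache bandwidth and the hit rates are made-up numbers for illustration, not AMD specs.

```cuda
#include <cstdio>

// Back-of-envelope model: a large on-die cache serves some fraction of
// memory traffic at cache bandwidth, so
//   effective_bw ~= hit_rate * cache_bw + (1 - hit_rate) * dram_bw
int main() {
    const double dram_bw  = 80.0;    // GB/s: fast dual-channel DDR5 (figure from the post)
    const double cache_bw = 1000.0;  // GB/s: hypothetical on-die SRAM, illustrative only
    for (int pct = 30; pct <= 70; pct += 20) {
        const double hit = pct / 100.0;
        const double eff = hit * cache_bw + (1.0 - hit) * dram_bw;
        printf("hit rate %d%% -> ~%.0f GB/s effective\n", pct, eff);
    }
    return 0;
}
```

Under these assumed numbers, even a ~50% hit rate would put an APU's effective bandwidth in the same ballpark as a midrange discrete card, which is roughly the bet a large Infinity Cache on an APU would be making.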