Leoneazzurro5
Regular
Which I/O pins?
Those for the VRAM
The memory controllers are supposed to be on the GCDs, not on the cache dies.
some senior leak developer
They don't ...

Moreover, there are patents from AMD which hint at the opposite.
Even in this topic, our senior leaker stated that IC and I/O will be on 6nm and placed on the MCD.
I am quite sure I have seen other patents showing a different arrangement of the interposer with cache. Btw, we'll see what it really is when RDNA3 launches. To me, having the memory bus on the compute chiplets is really a waste: I/O scales a lot worse than compute, and on expensive 5nm it will add to the costs. Moreover, stacking the cache chip on top will only worsen heat transfer from the hotter parts (the compute dies), as the heat spreader sits on top. But, as said, we'll see.
On the GPU side, it could be that AMD is planning to extend their modularity options to the cache chips: for example, producing the same V-Cache chips that can go to either GPUs or CPUs, or having the same V-Cache chips serve different generations of GPUs.
That means the I/O would need to stay out of the cache chips, as those same chips could be paired with SoCs that use very different memory technologies (DDR4, DDR5, LPDDR5, GDDR6, GDDR7, HBM2E, HBM3, etc.). By putting the PHYs inside the cache chips you're limiting the types of solutions those cache chips can be used in.
We're gonna see it in DC parts too later on.
Quite frankly, the only memory type used for high-performance GPUs today and in the foreseeable (short-to-medium term) future is GDDR6 (GDDR6X counting Nvidia's solutions, but so far AMD has shown no sign of wanting to use it). DDR4, DDR5 and LPDDR5 are for low-performance solutions which quite probably will not need stacked cache. HBM is still expensive, to the point where IC was seen as a viable alternative to it. GDDR7 is not even a thing today, nor next year. While I understand your point, the more the process node shrinks, the more the cost of I/O on the GCDs will increase (imagine almost the same area used for I/O on 5nm and 3nm, with the latter process being 30-40% more expensive...).
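The cost argument above can be sketched with some quick arithmetic. This is only a back-of-the-envelope illustration: the wafer prices, wafer area, and the I/O block size below are all made-up assumptions (not real foundry pricing), chosen to show how a block that barely shrinks between nodes gets more expensive as the node does.

```python
# Illustrative only: wafer costs, wafer area, and I/O block area are
# hypothetical numbers, not actual TSMC pricing or AMD die sizes.

WAFER_COST = {"5nm": 17000.0, "3nm": 17000.0 * 1.35}  # assume 3nm ~35% pricier
WAFER_AREA_MM2 = 70685.0  # ~300 mm wafer, ignoring edge loss for simplicity


def cost_per_mm2(node: str) -> float:
    """Rough silicon cost per mm^2 on a given node."""
    return WAFER_COST[node] / WAFER_AREA_MM2


def io_block_cost(node: str, area_mm2: float) -> float:
    """Cost of an I/O block whose area barely shrinks between nodes."""
    return cost_per_mm2(node) * area_mm2


# A hypothetical ~40 mm^2 PHY + memory-controller block costs ~35% more
# on 3nm than on 5nm, despite doing the exact same job -- which is the
# incentive to move it onto a cheaper, older-node chiplet instead.
io_5nm = io_block_cost("5nm", 40.0)
io_3nm = io_block_cost("3nm", 40.0)
print(f"5nm I/O block: ${io_5nm:.2f}, 3nm I/O block: ${io_3nm:.2f}")
```

Compute logic benefits from the denser node, so its cost per transistor can still fall; the fixed-area I/O only gets pricier, which is the case for putting it on a cheaper 6nm MCD.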
“We’ve worked with TSMC to optimize 5nm for high performance computing,” said Su. “[The new process] offers twice the density, twice the power efficiency and 1.25x the performance of the 7nm process we’re using in today’s products.”
There are zero reasons to put anything more than 16GB on a gaming videocard for the foreseeable future.

I doubt AMD is going to put only 16GB on it.