I'm not an expert either by far. I believe that the issue is not the size of the memory controller but the physical IO (pins or bumps at the back of the chip).

Despite having a 256-bit memory bus, the memory controllers on Pitcairn are quite small relative to the total die according to die photos (Tahiti, however, is another matter). There should still be space around the perimeter of a future die-shrunk GPU/APU to fit them in, especially if there is a sizeable CPU component on the same die.
I'm not an expert on these matters though; there could still be problems routing traces from the memory controllers to the board on a die-shrunk GPU/APU.
If so I guess there is no need. I wouldn't mind making the thread, but it seems like the specs leaked by bg's source are the only ones that matter.
Even with two chips that's a lot of power. Also, I agree with most of what you said, but are we sure they are going to start off using a SoC? I always imagined they would start off with discrete chips and move to a SoC down the line for cost reduction.
I won't post a picture of an HD 7850 cooling solution, everybody knows how it looks, nor will I post pictures of coolers for PC CPUs, for the same reason.
The HD 7850 runs reasonably cool, but its cooling system is substantial.
Even if Sony were to get its hands on a PD (Piledriver) part with, say, a TDP of 45 Watts, that still means a pretty impressive cooling system (when you take both chips into account).
The measurements above are made inside a pretty big PC tower; if you pack all of that into a significantly tinier box... I don't know, it sounds a bit on the high side to me.
Even with the SoC I was considering, I would not be surprised if the GPU clock had to be adjusted down. I'm not sure one would want more than 100 Watts to dissipate from a single chip in a not-that-big box.
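Just to make the back-of-the-envelope arithmetic explicit, here is a rough sketch; the 130 W figure for a desktop HD 7850-class GPU and the 45 W Piledriver-class CPU are assumptions for illustration, not confirmed console specs:

```python
# Rough two-chip vs. single-SoC power budget (all figures are assumptions)
cpu_tdp_w = 45    # assumed low-power Piledriver-class CPU
gpu_tdp_w = 130   # assumed HD 7850-class discrete GPU at desktop clocks

two_chip_budget = cpu_tdp_w + gpu_tdp_w
print(f"Two-chip budget: ~{two_chip_budget} W")  # ~175 W to dissipate in a small box

# If a single SoC is capped around 100 W, the GPU's share shrinks,
# which is why a downward clock adjustment seems plausible.
soc_cap_w = 100
gpu_share_w = soc_cap_w - cpu_tdp_w
print(f"GPU budget inside a ~{soc_cap_w} W SoC: ~{gpu_share_w} W")
```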
As for the price reduction, especially through a shrink, I wonder. It seems to get costlier and costlier at every stage: R&D, wafers, etc. Actually, looking at TSMC's 28nm process after more than one year in production, it seems that it's still not mature. Capacity might still be a bit problematic, with an impact on price. One may wonder if sticking to a process could save more money than pursuing shrinks. When is 22nm going to be available? I would bet that before its production is mature and capacity is no longer constrained, it could be 3 years or more from now.
It's my bet, but Sony, in their sucky financial situation, should plan on being comfortable with the part. If they can shrink the chip after more than 4 years (for the second half of the product's life), all the better, but they have to be comfortable from the start.