Maybe it has something to do with the professional products?
Their problems in the pro market aren't related to memory bandwidth and wouldn't be fixed in any way whatsoever by throwing a wider bus at Tahiti.
Cypress and Cayman are so bad that the new chip shouldn't have much trouble being almost 2x as fast with a 256-bit bus.
Do you think it's more or less likely that high-end Kepler will have a 512-bit bus over a 384-bit bus? (The latter is my "default" assumption, but I wouldn't be that surprised if they went 512-bit.) I'm asking because GF100/GF110's memory speeds are significantly lower than Cypress/Cayman's, and so NVIDIA would have more room for bandwidth increases without needing a wider bus or going to XDR2 (unless they have other issues). Even 256-bit at 6 GHz (IIRC the rated speed of 6970's memory) would match the 580's bandwidth (but I highly doubt they would go that route unless the high-end Kepler is GF114-sized or so, and even then…).
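The bandwidth comparison above is easy to sanity-check: peak bandwidth is just bus width (in bytes) times the effective data rate. A quick sketch (the GTX 580's ~4.008 GT/s effective GDDR5 rate and the 6 GT/s "rated" figure for the 6970's chips are the numbers assumed in the post):

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s = (bus width in bits / 8) * effective data rate in GT/s."""
    return bus_width_bits / 8 * data_rate_gtps

# GTX 580: 384-bit bus at ~4.008 GT/s effective GDDR5
gtx580 = mem_bandwidth_gbs(384, 4.008)   # ~192.4 GB/s
# Hypothetical Kepler config: 256-bit bus at the 6 GT/s rated speed of the 6970's chips
hypothetical = mem_bandwidth_gbs(256, 6.0)  # 192.0 GB/s

print(f"GTX 580: {gtx580:.1f} GB/s, 256-bit @ 6 GT/s: {hypothetical:.1f} GB/s")
```

So a narrower 256-bit bus at 6 GT/s does indeed land within a fraction of a percent of the 580's bandwidth, which is the point being made.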
The R600 graphics core was quite a bit smaller than the die size alone suggested. The memory interface padding at the perimeter was mostly stacked and took up a generous portion of the IC area, which is the reason for the "miraculous" fitting of a 512-bit interface into just 420 sq.mm on the 80nm process. A very unbalanced design, later beaten by RV670. Also, RV670 was GDDR3/4 compatible.
Heck, they fit a 512-bit bus on R600, which was what? ~420 mm²?
Start here:

While it's possible that it might have escaped notice, I don't remember you arguing for it before.
When I google for XDR2, the only real, non-white-paper products I find are Michelin tires.

Shtal said:
And in this price range, the higher price of Rambus memory is unlikely to play a role...
When I google for XDR2, the only real, non-white-paper products I find are Michelin tires.
I hope it stays that way.
then what are we doing?
I'm pretty confident about the $80 number. It should be in that ballpark, if my memory serves me well. I have seen a very nice table (which, of course, I can't find now, but you can help me look) with all the component prices for Radeons and GeForces.
Actually, XDR2 couldn't be used in the low end because of the price.
And a different memory controller might cost more money in redesign compared to a flexible design.
It's like a design with modularization now.

I'm not arguing for the XDR theory, but don't high-end and low-end GPUs already have different memory controllers?
Exactly.

In my opinion, this whole "XDR2 for HD 7000 high-end" story is BS.
It probably started as an enthusiast's speculation/wishful thinking/wet dream, was picked up by a site that of course didn't name any source, then other sites picked up the story for teh clickz, and now many of the enthusiasts among their readers apply the "it's all over the net and I want it to be true, so it must be true"™ filter and defend it vigorously.
My personal "I learned the hard way that such rumors usually turn out to be a pile of goo"™ filter tells me we'll probably see at most a 384-bit controller with mildly increased GDDR5 clocks on Tahiti. Best case.
But we won't see anything fancy until one of the following generations introduces an interposer + stacked memory.