AMD: Southern Islands (7*** series) Speculation/Rumour Thread

Do you think it's more or less likely that high-end Kepler will have a 512-bit bus over a 384-bit bus? (The latter is my "default" assumption, but I wouldn't be that surprised if they went 512-bit.) I'm asking because GF100/GF110's memory speeds are significantly lower than Cypress/Cayman's, and so NVIDIA would have more room for bandwidth increases without needing a wider bus or going to XDR2 (unless they have other issues). Even 256-bit at 6 GHz (IIRC the rated speed of 6970's memory) would match the 580's bandwidth (but I highly doubt they would go that route unless the high-end Kepler is GF114-sized or so, and even then…).
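
As a rough back-of-the-envelope check, here's a minimal Python sketch of those bandwidth figures (the data rates are the commonly quoted effective GDDR5 speeds, so treat the exact numbers as approximate, and the wider-bus entries are purely hypothetical):

# Peak theoretical bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps).
# Data rates below are the commonly quoted effective GDDR5 speeds; treat them as approximate.
def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

configs = [
    ("GTX 580, 384-bit @ ~4.0 Gbps", 384, 4.0),
    ("HD 6970, 256-bit @ 5.5 Gbps", 256, 5.5),
    ("Hypothetical 256-bit @ 6.0 Gbps", 256, 6.0),
    ("Hypothetical 384-bit @ 5.5 Gbps", 384, 5.5),
    ("Hypothetical 512-bit @ 5.5 Gbps", 512, 5.5),
]
for name, bits, rate in configs:
    print(f"{name}: {bandwidth_gbs(bits, rate):.0f} GB/s")

That puts 256-bit @ 6 Gbps right at the 580's ~192 GB/s, while 384-bit at today's clocks already gives roughly 37% more.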

Kepler is a full node ahead of GF100, so you can expect a FLOPS increase close to 2×. The FLOPS/bandwidth ratio has been continually increasing over the last generations, so I wouldn't expect the same increase in bandwidth, but it should still go up by quite a bit to avoid severe bottlenecks.
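
To put some rough numbers on that ratio, here's a small sketch (approximate peak figures, and the Kepler entry just assumes 2x GF110's FLOPS, which is my assumption rather than anything confirmed):

# Approximate peak single-precision GFLOPS and bandwidth (GB/s); marketing peaks, not measured.
gflops_gtx480, bw_gtx480 = 1345.0, 177.0   # GF100 (GTX 480), approx.
gflops_gtx580, bw_gtx580 = 1581.0, 192.0   # GF110 (GTX 580), approx.
gflops_kepler = 2.0 * gflops_gtx580        # purely illustrative "~2x" assumption

print("GTX 480:", round(gflops_gtx480 / bw_gtx480, 1), "FLOPS/byte")
print("GTX 580:", round(gflops_gtx580 / bw_gtx580, 1), "FLOPS/byte")
for bus_bits, gbps in ((384, 5.5), (512, 5.5)):
    bw = bus_bits / 8 * gbps
    print(f"~2x-FLOPS Kepler, {bus_bits}-bit @ {gbps} Gbps:",
          round(gflops_kepler / bw, 1), "FLOPS/byte")

So the ratio only creeps up modestly with a 512-bit bus (about 8.2 to 9 FLOPS/byte), but jumps to around 12 if they stay at 384-bit with current GDDR5 clocks.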

Plus, running GDDR5 rated for 6 GHz at that speed is not easy; it requires a lot of effort in the memory controller and PHY (an area where NVIDIA doesn't particularly shine) and can be relatively inefficient, power-wise. So a 512-bit bus doesn't sound unrealistic at all.

Not that I have any inside information or personal experience of this, just my 2 cents.
 
Also, RV670 was GDDR3/GDDR4 compatible.
Heck, they fit a 512-bit bus on R600, which was what, ~420 mm²?
R600's graphics core was quite a bit smaller than the die size alone suggests. The memory interface pads around the perimeter were largely stacked and took up a generous portion of the die area, which is how the "miraculous" 512-bit interface fit into just ~420 mm² on the 80 nm process. A very unbalanced design, later beaten by RV670.
 
Well, if Cypress' MC is almost twice the size of Redwood's just to move from 4.2 GHz to 4.8 GHz, and we have no idea how much larger Cayman's MC is to hit 5.5 GHz, then maybe they figured it would be much easier to widen the bus and use a smaller/simpler, though slower, MC to get the bandwidth increase they wanted.
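
To illustrate that tradeoff, a quick sketch (the 240 GB/s target is an arbitrary round number I picked, not anything from AMD):

# For a fixed bandwidth target, a wider bus lets the GDDR5 run slower,
# which eases the demands on the memory controller and PHY.
target_gbs = 240.0  # arbitrary example target, GB/s

for bus_bits in (256, 384, 512):
    required_gbps = target_gbs * 8 / bus_bits
    print(f"{bus_bits}-bit bus needs ~{required_gbps:.1f} Gbps GDDR5 for {target_gbs:.0f} GB/s")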
 
I'm worried about the memory bandwidth limit as well. The 7xxx series should require more bandwidth with the new SM design;
a 384-bit MC would certainly help.
 
And in this price range, the higher price of Rambus memory is unlikely to play a role...

Is he sure? :LOL:
Sorry, but if the GDDR5 modules for a GTX 680 cost $80 while the XDR2 modules for Tahiti XT are about $150, then what are we doing?
That would give NVIDIA a strong and very noticeable advantage. ;)

When I google for XDR2, the only real, non-white paper products I find are Michelin tires.

I hope it stays that way.

+1. :)
 
then what are we doing?

Pulling random numbers out of thin air, it would appear. Let's stop doing that. Another example is Anand's claims about the MCs in Redwood and Cypress, which are bogus (only some parts of the MC actually follow that relationship, not the whole thing), yet the Internetz is running with it at great speed.
 
I'm pretty confident about the $80 number. It should be in that ballpark, if my memory serves me well. I have seen a very nice table (which, of course, I cannot find now, but maybe you can help me) with all the component prices for Radeons and GeForces. :)
OK, so obviously you think that this ~80% higher price is too much.
Then what's your guess, or what information do you have, about prices?

I cannot find anything on the Elpida, Hynix, Micron or Samsung websites about XDR2: no announcements about memory speeds or prices, nothing. :oops:
 
Actually, XDR2 couldn't be used in the low end because of its price.
And a different MC may cost more money in redesign work compared to a flexible design.
 
If the 7xxx series is going to use XDR2, where are they going to get the RAM from? I mean, who is going to produce it, as apparently no one has even produced a sample of XDR2 yet? Or has 7xxx been pushed back to late 2012?
 
In my opinion, this whole "XDR2 for HD 7000 high-end" is BS.
It probably started as an enthusiast's speculation/wishful thinking/wet dream, was picked up by a site that of course didn't name any source, then other sites picked up that story for teh clickz, and now many of the enthusiasts among their readers apply the it's all over the net and I want it to be true, so it must be true™-filter and defend it vigorously.

My personal I learned the hard way that such rumors usually turn out to be a pile of goo™-filter tells me we'll probably see at most a 384-bit controller with mildly increased GDDR5 clocks on Tahiti. Best case.

But we won't see anything fancy until one of the following generations introduces an interposer + stacked memory.
 
I'm not for the XDR theory, but don't high-end and low-end GPUs already have different MCs?
It feels like a modular design now.


In my opinion, this whole "XDR2 for HD 7000 high-end" is BS. [...]
Exactly
 