AMD: R9xx Speculation

I think Fudo is pretty clueless when it comes to those things…

Indeed, especially considering that GF has even shown 28nm wafers with ATI chips on them already, according to reports. It's more likely that TSMC will be the one getting 28nm chips out later than vice versa, and we should see the first chips in Q1.
 
do you mean these wafers?

[Image: globalfoundries_28nm_3i4z9.jpg]


Some sources claimed it was ATi's GPU, but others considered it to be Fusion...
 
Aren't those just test structures? It seems a bit early for AMD to be making anything else on 28nm, especially since I'd wager that 28nm designs haven't taped out yet.
 
Even then, I think a full Cypress and a full GF104 will have a significant difference in performance.
Extrapolating from OC GTX460 results, I think a full GF104 at 800/1600/1000MHz clocks would be less than 10% behind the HD5870 on average. Not sure if it would be possible for a viable product to clock it even higher (we haven't seen any Fermi chips with higher memory clocks yet at all, and a higher core clock would need a voltage bump, which might make things unreasonable). Of course, the HD5870 still has headroom too. Still, I think a full GF104 at these clocks would be quite a nice card without power requirements going through the roof, provided Nvidia can actually make full GF104 chips in quantity. It would certainly be a much better card than the GTX470...
 
Extrapolating from OC GTX460 results, I think a full GF104 at 800/1600/1000MHz clocks would be less than 10% behind the HD5870 on average. Not sure if it would be possible for a viable product to clock it even higher (we haven't seen any Fermi chips with higher memory clocks yet at all, and a higher core clock would need a voltage bump, which might make things unreasonable). Of course, the HD5870 still has headroom too. Still, I think a full GF104 at these clocks would be quite a nice card without power requirements going through the roof, provided Nvidia can actually make full GF104 chips in quantity. It would certainly be a much better card than the GTX470...

Hum, I dunno… The GTX 460 1GB has a 160W TDP.

A very simplistic calculation gives: (384/336) × (1600/1350) × 160 = 217W, which is pretty much what the GTX 470 draws.

Sure, that doesn't take static power into account, but then again it also assumes no intra-die variability and no voltage bump at all. I don't think it's too far off.
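The scaling estimate above can be sketched as a quick back-of-the-envelope calculation (a naive linear model in shader count and clock, ignoring static leakage and voltage changes, exactly as caveated):

```python
# Naive dynamic-power scaling estimate for a hypothetical full GF104,
# extrapolated linearly from the GTX 460 1GB (336 shaders @ 1350MHz, 160W TDP).
# Ignores static leakage, voltage bumps, and intra-die variability.

def scale_power(base_tdp_w, base_shaders, base_clock_mhz,
                new_shaders, new_clock_mhz):
    """Scale TDP linearly with unit count and shader frequency."""
    return base_tdp_w * (new_shaders / base_shaders) * (new_clock_mhz / base_clock_mhz)

est = scale_power(160, 336, 1350, 384, 1600)
print(f"Estimated full-GF104 TDP: {est:.0f}W")  # ~217W, close to the GTX 470's TDP
```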
 
Hum, I dunno… The GTX 460 1GB has a 160W TDP.

A very simplistic calculation gives: (384/336) × (1600/1350) × 160 = 217W, which is pretty much what the GTX 470 draws.

Sure, that doesn't take static power into account, but then again it also assumes no intra-die variability and no voltage bump at all. I don't think it's too far off.
Well, it would also be faster than the GTX470, plus idle power consumption would only be about half that of the GTX470.
Also, I think the GTX460 actually does a bit better in practice than on paper compared to the GTX470, which routinely exceeds its TDP (though not by as much as the GTX480).
Take these numbers here for instance: http://ht4u.net/reviews/2010/zotac_geforce_gtx_460_amp/index12.php
The GTX 460 AMP (1024MB, 810MHz core/1000MHz memory) draws roughly 180W, whereas the GTX470 draws 235W.
So while you're right that the difference in load power draw wouldn't be dramatic (10% or so), it would also be 10% or so faster (that OC GTX460 is already pretty much as fast as the GTX470 there). Factor in the much better idle power draw, and that makes it quite a bit more efficient a card.
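The efficiency argument can be put in rough numbers, using the load figures quoted above; the ~217W estimate and the ~10% performance edge over the GTX470 are assumptions carried over from this exchange, not measurements:

```python
# Rough perf-per-watt comparison from the figures in the post.
# GTX 470: ~235W load, baseline performance.
# Hypothetical full GF104: assumed ~217W load and ~10% faster than the GTX 470.

gtx470_power, gtx470_perf = 235.0, 1.00          # normalized performance
full_gf104_power, full_gf104_perf = 217.0, 1.10  # assumed values

power_delta = full_gf104_power / gtx470_power - 1
perf_per_watt_gain = (full_gf104_perf / full_gf104_power) \
                   / (gtx470_perf / gtx470_power) - 1

print(f"Load power vs GTX470: {power_delta:+.0%}")         # about -8%
print(f"Perf/W advantage:     {perf_per_watt_gain:+.0%}")  # about +19%
```

Add the halved idle draw on top and the gap in everyday efficiency widens further.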
 
Even then, I think a full Cypress and a full GF104 will have a significant difference in performance.

True, but a full GF104 should be about on par with the 5850. That should pressure the 5870 too, since it wouldn't make sense for there to be a >$100 difference between it and the 5850.
 
I don't know. The front wafer looks very much as if it contained a six-core CPU, albeit not on 32 or 28nm (i.e. it may just be a 45nm Thuban wafer; I'm too lazy to check the die size).

If we assume for a moment that the front wafer is 45nm SOI, the middle wafer 32nm SOI, and the last one 28nm bulk (with some test structures on it), and consider that on the middle wafer one can recognize a large cache structure (L3?) accompanied by four smaller blocks (with smaller caches), it could even be a 4-module Bulldozer die, if it had been taped out and had already made its first run through the fab when the photo was taken ;)
 
According to the old BSN article:

"Global Foundries representatives would not talk about what chips were on that wafer, but they were definitely not the test SRAM structures that we saw in June."
 
According to the old BSN article:

"Global Foundries representatives would not talk about what chips were on that wafer, but they were definitely not the test SRAM structures that we saw in June."

Ugh, BSN wouldn't know the difference between a more complicated test structure and a pubic hair even if it was explained to them by all of Intel's process engineers. Also, there is more to test structures than SRAM, so the fact that a rep purportedly said that those aren't the same SRAM test structures is hardly equivalent to there being some bombad new secret chips on those wafers, or even equivalent to saying that there's a working chip in there as opposed to all sorts of functional units routed and placed.

On another note, shouldn't we wait for GF to actually deliver anything before we jump up and down with joy over ATI making chips there? As far as I can see they're still a ways off from proving their viability as anything but AMD's foundry.
 
it could even be a 4-module Bulldozer die, if it had been taped out and had already made its first run through the fab when the photo was taken ;)

Considering when BD actually taped out, that would have required a jiggawatt of power very close to the present day being used, and we've measured no such surge in power consumption recently:smile:
 
Ugh, BSN wouldn't know the difference between a more complicated test structure and a pubic hair even if it was explained to them by all of Intel's process engineers. Also, there is more to test structures than SRAM, so the fact that a rep purportedly said that those aren't the same SRAM test structures is hardly equivalent to there being some bombad new secret chips on those wafers, or even equivalent to saying that there's a working chip in there as opposed to all sorts of functional units routed and placed.

On another note, shouldn't we wait for GF to actually deliver anything before we jump up and down with joy over ATI making chips there? As far as I can see they're still a ways off from proving their viability as anything but AMD's foundry.

THOSE ARE NEW ATI GRAPHICS PROCESSORS AND STOP MAKING FUN OF MY FRIEND IT IS TOO A NEW GPU FROM ATI AT GF AND ITS COMING OUT NEXT MONTH!


omg, haxxorz.



Also, Dual 40nm GPU next generation ATI card this year:

http://www.fudzilla.com/graphics/graphics/new-dual-chip-card-comes-from-ati

We've just learned that ATI's next generation, let's call it Radeon 6000 series, is going to get a new high-end card. We are talking about a dual-chip card based on new 40nm chips that are expected to get slightly faster than the current generation. The best part is that it comes in 2010.

We see the new performance chips as a tweaked version of the highly successful Cypress core, but this time they will introduce support for the new HDMI interface as well as a new UVD – Eyefinity. Eyefinity will become more functional and cheaper to implement on multiple monitor setups.

Of course the core itself will be slightly improved, but we don’t expect a spectacular performance increase. It will be faster than the current generation, that much is clear, but we still don't know much about Nvidia's answer to this card.

Meh, same old same old...

I keep coming back to thinking about AMD's year-on-year progression of performance per watt. Keeping loosely to 'Moore's Law', the next generation on 40nm will have to offer more performance for less power consumption. Theoretical single-precision TFLOPS, that is.

It's certainly possible to use TSMC's 40nm process better, but is it possible to increase SP count without changing the SP design, and with better power consumption? Tall order to hit that perf/watt plot.

I'm ready to be surprised though (no, not like that, put your pants back on).
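For scale, that year-on-year perf/watt plot can be sketched with commonly quoted figures for the last two generations (HD 4870: ~1.2 TFLOPS SP at ~160W board TDP; HD 5870: ~2.72 TFLOPS at ~188W; treat these as approximate spec-sheet numbers, not measurements):

```python
# Approximate theoretical SP throughput per watt across two ATI generations.
# Figures are commonly quoted board TDPs and peak SP TFLOPS, not measurements.

hd4870 = {"tflops": 1.2, "tdp_w": 160}
hd5870 = {"tflops": 2.72, "tdp_w": 188}

eff_4870 = hd4870["tflops"] * 1000 / hd4870["tdp_w"]  # GFLOPS per watt
eff_5870 = hd5870["tflops"] * 1000 / hd5870["tdp_w"]

print(f"HD 4870: {eff_4870:.1f} GFLOPS/W")     # ~7.5
print(f"HD 5870: {eff_5870:.1f} GFLOPS/W")     # ~14.5
print(f"Gain:    {eff_5870 / eff_4870:.2f}x")  # nearly doubled in one generation
```

Repeating a near-2x jump like that while staying on the same 40nm node is exactly the tall order described above.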
 
Found it. The GF person who talked to BSN probably misinformed them, because the wafers really aren't 28nm. Gipsel is right. Only the yellow wafer is 28nm (bulk, SRAM); the second one is 32nm (SOI) and the last one is 45nm (SOI, Thuban – or at that time called "45nm Istanbul").

I think it's quite likely that the 32nm SOI product was Fusion. It definitely isn't any 28nm GPU, so Fudzilla is right that we haven't seen any 28nm bulk wafer yet...
 
Would it make sense to stitch two Juniper-class GPUs onto the same package, à la the old Core 2 Quads from Intel? Sure, the package would cost more, but they could easily make up for that by only having to design one chip instead of two, and by the higher yields of spanning the Juniper-level chip over several product classes:

67xx, 68xx and 69xx.

That way the perfect dies could all become 6870 and 69xx class chips, the dies which meet the leakage requirements but have a defect could become 6850s, the dies with higher leakage but no defects could become 6770s, and the ones which are both defective and high-leakage could become 6750s.
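The proposed harvesting scheme boils down to a two-bit lookup on each tested die; a minimal sketch, with the SKU assignments purely speculative as in the post above:

```python
# Hypothetical die-harvesting scheme for a dual-Juniper-style lineup,
# as speculated in the post: bin each die by (has_defect, high_leakage).

def assign_sku(has_defect: bool, high_leakage: bool) -> str:
    """Map a tested die to a speculative Radeon 6000-series SKU."""
    if not has_defect and not high_leakage:
        return "6870 / 69xx"   # perfect dies: top single-chip and dual-chip cards
    if has_defect and not high_leakage:
        return "6850"          # has a defect but meets leakage requirements
    if not has_defect and high_leakage:
        return "6770"          # fully functional but leaky
    return "6750"              # defective and leaky

print(assign_sku(False, False))  # 6870 / 69xx
print(assign_sku(True, True))    # 6750
```

The appeal is that every bin has a home, so almost nothing gets thrown away.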
 
Would it make sense to stitch two Juniper-class GPUs onto the same package, à la the old Core 2 Quads from Intel?
No. Setup/rasterization is a hard serialization/choke point, and even an MCM would behave like today's multi-GPU SLI/CrossFire.

CPUs can scale with MCMs because they don't have any such problems to worry about.
 
No. Setup/rasterization is a hard serialization/choke point, and even an MCM would behave like today's multi-GPU SLI/CrossFire.

CPUs can scale with MCMs because they don't have any such problems to worry about.

What about the side port?

We had the first-generation sideport on RV770.

The second-generation sideport was scrapped, but development obviously continued, even if the current chips don't feature it.

A third-generation sideport? What would that be able to do?
 