AMD: Southern Islands (7*** series) Speculation / Rumour Thread

Also I don't believe the Nvidia rumours; the GK110 die size is just too large to be used in a higher-production, lower-price model.
Absolutely. It's one thing to produce a 520 mm² chip with no redundancy in high volume (GTX280, GTX580), but 550 mm² with redundancy is completely insane.

Right?
 
I can't imagine why they would; a refresh would bring reduced power consumption, possibly a smaller die, and likely a bump in performance.

Also I don't believe the Nvidia rumours; the GK110 die size is just too large to be used in a higher-production, lower-price model.

Well, the AMD website states 6.0 Gbps memory, which is the same speed as the current 7970 GE, so that doesn't bode well for a faster core speed.
 
Well, the AMD website states 6.0 Gbps memory, which is the same speed as the current 7970 GE, so that doesn't bode well for a faster core speed.

Everything I have read around here about GDDR5 suggests that raising clocks becomes exponentially difficult/expensive (almost synonymous in the semiconductor industry) as you approach the "theoretical" limit (7 Gbps). From what I understand, the low-hanging fruit has been picked, and higher clocks are unlikely, as the investment becomes too costly for the gains.
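For context on what those per-pin rates mean in practice, here's a quick sketch of the aggregate bandwidth arithmetic. The 384-bit bus width is an assumption for illustration (it's what Tahiti/7970 uses); the formula is just data rate per pin times bus width divided by 8 bits per byte.

```python
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int = 384) -> float:
    """Aggregate memory bandwidth in GB/s: per-pin rate * bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(6.0))  # 6.0 Gbps on a 384-bit bus -> 288.0 GB/s
print(bandwidth_gb_s(7.0))  # at GDDR5's ~7 Gbps ceiling -> 336.0 GB/s
```

So even pushing GDDR5 all the way to its nominal ceiling only buys about 17% more bandwidth over the 7970 GE's 288 GB/s, which is consistent with the "too costly for the gains" point above.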

If this is correct, it appears that Nvidia and AMD are in a bit of a rut until the release of DDR4 and GDDR6. Stacked memory is another technology that has the potential to go into production within a year, and I imagine that GDDR6 and stacked memory are not mutually exclusive technologies. No doubt stacked GDDR6 would be expensive as heck and economically unfeasible, but I'm sure the bandwidth numbers would be impressive.

Daydreams of absurd bandwidth aside, the OEM 8970 is relatively ancient news; that information was passed around back in January. OEM lineups can be, and often are, different from retail lineups. In some cases, like the GTX 100 series, the retail series may completely skip the logical nomenclature; so I wouldn't put too much emphasis on what AMD decides to name their OEM cards.

In fact, I think that getting hung up on what a product is named is rather ridiculous. We should be judging these products by their qualities, not by whatever AMD and Nvidia decide to call them.
Absolutely. It's one thing to produce a 520 mm² chip with no redundancy in high volume (GTX280, GTX580), but 550 mm² with redundancy is completely insane.

Right?
To clarify, you are being sarcastic, correct? GK110 is a big chip, and that means that it is going to be very prone to defects. There should be a pretty good number of chips that aren't fully functional (K20X and Titan themselves already have disabled logic). If Nvidia managed to produce a 580, 570, and 560 Ti 448 off of one 500+ mm2 chip, they should be able to produce two GeForce branded models with the 700 series as well. I also have been led to believe that GK110 Tesla products have been very popular, so Nvidia would logically have a lot of chips that "didn't quite make it" and release them as GeForce models.
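The harvesting argument above can be made concrete with a toy yield model. This is just a sketch under stated assumptions, not real foundry data: defects are Poisson-distributed over the die, each defect lands in a random shader cluster, and a die is sellable if no more clusters are defective than we're allowed to fuse off. The 550 mm², 15-cluster, 0.2 defects/cm² numbers are purely hypothetical.

```python
import math
import random

def poisson(lam: float, rng: random.Random) -> int:
    """Sample a Poisson-distributed defect count (Knuth's multiplication method)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sellable_fraction(defect_density_per_cm2: float, die_area_mm2: float,
                      clusters: int, max_disabled: int,
                      trials: int = 100_000, seed: int = 0) -> float:
    """Fraction of dies sellable when up to `max_disabled` clusters may be fused off.

    Toy model: each defect lands in a uniformly random cluster; a die is
    sellable if the number of defective clusters <= max_disabled.
    """
    rng = random.Random(seed)
    lam = defect_density_per_cm2 * die_area_mm2 / 100.0  # convert mm^2 to cm^2
    ok = 0
    for _ in range(trials):
        bad = {rng.randrange(clusters) for _ in range(poisson(lam, rng))}
        if len(bad) <= max_disabled:
            ok += 1
    return ok / trials

# Hypothetical GK110-like chip: ~550 mm^2, 15 clusters, 0.2 defects/cm^2.
full = sellable_fraction(0.2, 550, 15, 0)       # only perfect dies count
harvested = sellable_fraction(0.2, 550, 15, 1)  # one cluster may be disabled
```

Even with made-up numbers, the shape of the result is the point: allowing just one disabled cluster roughly doubles the fraction of sellable dies, which is why a big die with redundancy isn't the same proposition as a big die without it.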
 
Sarcasm is a really great way to unambiguously convey information with a touch of fine humour, if you know what I mean.
It's unambiguous in the appropriate context. I'm not well enough educated on yields and redundancy, so silent_guy's comment went over my head. I'm sure those who are more knowledgeable about those subjects would have picked up on the meaning of his statement more easily.
 
I'm just joking by the way :)

Nvidia did have some trouble from having to sell that huge GT200/GT200b in the upper midrange, by the way. They even canned it in the end. It was just too expensive, but that's why they made GF104, GF114, and GK104 after that.
 
OEM lineups can be, and often are, different from retail lineups. In some cases, like the GTX 100 series, the retail series may completely skip the logical nomenclature; so I wouldn't put too much emphasis on what AMD decides to name their OEM cards.

Regardless, that's still the product name OEMs will list as being included in their PCs, so customers will see that name and justifiably assume it to be faster than the 7970. So unless clock speeds are a decent amount higher, it's blatantly misinforming the consumer.

In fact, I think that getting hung up on what a product is named is rather ridiculous. We should be judging these products by their qualities, not by whatever AMD and Nvidia decide to call them.

That's exactly what I'm doing. I'm judging the "8970" on its qualities, which I judge to be ridiculous if they are identical to the 7970's. Of course the name is important; it's what the average consumer will use to gauge what level of product they are purchasing, especially when you consider there are likely to be no benchmarks available for the 8970, since it's an OEM part, so all consumers will have to go off is 7970 benchmarks or specifications that most of them won't understand.
 
I'm just joking by the way :)
I know, I caught that one. ;)
Nvidia did have some trouble from having to sell that huge GT200/GT200b in the upper midrange, by the way. They even canned it in the end. It was just too expensive, but that's why they made GF104, GF114, and GK104 after that.
I think it's even more amazing that they were able to fab a chip so large back then. I don't think it was the best decision for them, as it opened up a big gap for AMD to hit Nvidia with much smaller, more efficient dies, but it's quite an impressive feat.
 
Regardless, that's still the product name OEMs will list as being included in their PCs, so customers will see that name and justifiably assume it to be faster than the 7970. So unless clock speeds are a decent amount higher, it's blatantly misinforming the consumer.
I doubt anyone who already has a 7970 is going to be buying a new OEM system, so the worst-case scenario where they're paying large amounts of money for no reason is a non-issue. Worst case, someone could have got last year's model slightly cheaper, with the same card.

Yes, it's not exactly ethical to play name games in order to swindle your customers. It's not really a huge issue here, though. I've gotten into this debate before, and it had to be severed into its own thread, so I'm not going to continue commenting on the morality of rebadging.
That's exactly what I'm doing. I'm judging the "8970" on its qualities, which I judge to be ridiculous if they are identical to the 7970's. Of course the name is important; it's what the average consumer will use to gauge what level of product they are purchasing, especially when you consider there are likely to be no benchmarks available for the 8970, since it's an OEM part, so all consumers will have to go off is 7970 benchmarks or specifications that most of them won't understand.
We're informed enthusiasts, though. We are, in a practical sense, immune to getting swindled by this. I think it's pretty universally accepted that rebadging is undesirable, so there's not really much point in complaining about it to each other. You can always count on there being mundane complaints about rebadging every time Nvidia and AMD release a new generation.

That discussion is uninteresting though. It's simple hiveminded circlejerking, preaching to the choir, etc. Speculation and technical debates are so much more satisfying and enlightening.
 
It wouldn't hurt if people exercised some rational thought before throwing out statements. You don't need to be an insider to make a bunch of conclusions that have a high chance of being on the mark.

I don't know the defect rate of the 28nm process, but given the total absence of people complaining about it, I wouldn't bet on it being worse than 40nm in any way.

We know that the great resurgence of Nvidia's gross margins started with their Fermi line, despite the fact that their dies were significantly larger than AMD's. We also know that their GTX 470, 570, and even 580 were available everywhere, the former even in the early days of 40nm.

We know that 28nm has been used successfully for very high-volume products for almost 18 months now. This process should be rock solid and mature by now.

With a GK110 only 5% bigger than a GF110, and the derived products all having at least one (and often more) redundant cluster, I fail to see how anyone can make an argument that it will yield fewer sellable dies per wafer than a GF110.

Yet somehow people keep on bringing up this thing about it not being producible. If that's what you believe, fine, but would you then at least explain the reasoning, other than "dude, it's big, duh", behind that statement?
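To put a number on "only 5% bigger": here's the common first-order dies-per-wafer estimate (gross dies minus an edge-loss correction). The ~520 mm² and ~551 mm² die areas and the 300 mm wafer are the usually-cited figures, but treat the whole thing as a back-of-the-envelope sketch rather than foundry math.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order estimate: wafer area / die area, minus partial dies at the edge."""
    r = wafer_diameter_mm / 2.0
    gross = math.pi * r * r / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(gross - edge_loss)

gf110 = dies_per_wafer(520)  # GF110-class die, ~520 mm^2 -> 106 candidates
gk110 = dies_per_wafer(551)  # GK110-class die, ~551 mm^2 -> 99 candidates
```

That's roughly 7% fewer candidate dies per wafer before yield even enters the picture, a modest penalty that extra redundant clusters could plausibly claw back in sellable parts.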
 
I think it's pretty universally accepted that rebadging is undesirable, ...
I think it's completely acceptable, and even desirable, to rebadge an existing SKU from a higher to a lower segment as a new higher segment becomes available. (Think the 9800 GTX becoming the GTS 250.) It's much easier for a consumer to judge the relative performance position within the same product line.
 
With a GK110 only 5% bigger than a GF110, and the derived products all having at least 1 (and often more) redundant clusters, I fail to see how anyone can make an argument that it will yield at less than the same number of sellable dies per wafer than a GF110.

Yet somehow people keep on bringing up this thing about it not being producible. If that's what you believe, fine, but would you then at least explain the reasoning, other than "dude, it's big, duh", behind that statement?

I remember making that argument, and it was dumb/unfounded/just wrong. I was saying they would not bother using GK110 for gaming at all, but they did, just not at 500 euros (which itself is not that low a price for a consumer product).
 
I don't recall either company ever renaming the same product to indicate it's a next generation part in the same performance bracket.

Nvidia's GeForce4 MX, 9800 GT, etc.? The difference with those was that they were released into the consumer market as next-gen cards that could be bought off the shelf.

Frankly, if the "8970" really is the exact same GPU as the "7970" without at least a 10-15% clock speed increase, then that's a really low move by AMD.

It's not really much worse than labelling mobile parts a model higher than they really are. It's a sign of the times: numbers need to increase every year to give the illusion of progress. Believe me, the numbers will increase a lot more than the actual tech progress will over the coming few years.
 
I remember making that argument, and it was dumb/unfounded/just wrong. I was saying they would not bother using GK110 for gaming at all, but they did, just not at 500 euros (which itself is not that low a price for a consumer product).
I think they can profitably sell a variant of this in, say, a $350 card. There's just no reason to do it.
 
The "competition" in the gaming graphics card industry isn't working very well these days.
 
It wouldn't hurt if people exercised some rational thought before throwing out statements. You don't need to be an insider to make a bunch of conclusions that have a high chance of being on the mark.

I don't know the defect rate of the 28nm process, but given the total absence of people complaining about it, I wouldn't bet on it being worse than 40nm in any way.

We know that the great resurgence of Nvidia's gross margins started with their Fermi line, despite the fact that their dies were significantly larger than AMD's. We also know that their GTX 470, 570, and even 580 were available everywhere, the former even in the early days of 40nm.

We know that 28nm has been used successfully for very high-volume products for almost 18 months now. This process should be rock solid and mature by now.

With a GK110 only 5% bigger than a GF110, and the derived products all having at least one (and often more) redundant cluster, I fail to see how anyone can make an argument that it will yield fewer sellable dies per wafer than a GF110.

Yet somehow people keep on bringing up this thing about it not being producible. If that's what you believe, fine, but would you then at least explain the reasoning, other than "dude, it's big, duh", behind that statement?

My apologies for making a small mention of Nvidia in the AMD thread.
About the die size remark: I should have said "it's very unlikely that they will use a 500mm²+ die". My apologies for using absolutes.
 
That statement is still illogical.

Illogical? Haha, wow, really? It's not logical to think they will want smaller die sizes, thereby increasing the number of chips per wafer, if yields are similar?

If anything, it's illogical to assume the GTX 780's die size will be nearly double that of the GTX 680's.
 