AMD: Sea Islands R1100 (8*** series) Speculation/Rumour Thread

Or are you suggesting all Kepler refreshes in the GTX 7xx series will be GPGPU monsters?

Whether or not GK110 counts as a refresh, I consider that a possibility (I wouldn't bet on it, though). Obviously they won't have that many SMXs, nor high DP performance or ECC support, but any other improvements that should be in GK110 to make GPGPU workloads faster could be in there, imho.
That probably wouldn't make them GPGPU monsters, but it would at least be enough to make them not look silly in that area against GCN :).
 
I submit that GK100 was canned before GK104 launched; it was Nvidia's intent to release GK100 first, but it was unable to do so. GK110 will undoubtedly precede the launch of a GK114 (barring a failure to pull that off as well).
 
It's the highend because GK100 got canned.

That is an assumption without proof.

Logic dictates otherwise: that a "GK100" (i.e. a product that would launch at the very beginning of the 28nm cycle) never existed, and that instead the big die was planned for a time when it would be profitably manufacturable.

Ways of doing things change, especially when there is a need to. And after the Fermi debacle, there certainly was a need: it cost Nvidia dearly in market share and publicity. Why would they repeat such insanity? GF104 was very well received, so it is only logical that Nvidia said "you know what? Let's beef that part up and bring it first, circumventing the shitstorm of making a 550 mm^2 die on an immature process that we'd have to charge $500 per die for just to break even". Pure logic.
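
For a rough sense of what that break-even claim means, here's a back-of-the-envelope sketch in Python. The wafer cost and defect density below are made-up illustrative numbers, not actual TSMC figures:

Code:
import math

# All numbers are illustrative assumptions, not actual TSMC figures.
WAFER_DIAMETER_MM = 300.0   # standard 300 mm wafer
WAFER_COST_USD    = 5000.0  # assumed price of an early 28nm wafer
DIE_AREA_MM2      = 550.0   # the hypothetical big Kepler die
DEFECT_DENSITY    = 0.004   # defects/mm^2 (~0.4/cm^2, an immature process)

def dies_per_wafer(area, diameter=WAFER_DIAMETER_MM):
    """Gross die count: wafer area over die area, minus an edge-loss term."""
    r = diameter / 2.0
    return math.pi * r * r / area - math.pi * diameter / math.sqrt(2.0 * area)

def poisson_yield(area, d0=DEFECT_DENSITY):
    """Poisson yield model: fraction of dies that catch zero defects."""
    return math.exp(-area * d0)

gross = dies_per_wafer(DIE_AREA_MM2)        # ~100 candidates per wafer
good = gross * poisson_yield(DIE_AREA_MM2)  # ~11 good dies survive
print(f"cost per good die: ${WAFER_COST_USD / good:.0f}")  # -> ~$451

Under these assumptions you would need roughly $450 per good die just to cover the wafer - before packaging, board, memory and margin - which lands in the same ballpark as the $500 figure above.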
 
That is an assumption without proof.

Logic dictates otherwise: that a "GK100" (i.e. a product that would launch at the very beginning of the 28nm cycle) never existed, and that instead the big die was planned for a time when it would be profitably manufacturable. ...

Logic dictates that GK100 was at least planned and at some point they decided it was unmanufacturable or unprofitable to make manufacturable. Unlike Fermi, they didn't go ahead and attempt to release an unprofitable chip. Instead they canned it and moved on to its refresh. THAT is the change from the Fermi situation.

All you have to do is look at the entire history of Nvidia's graphics development.

Sure they could pull an AMD and stop making big die chips, but I somehow don't see that happening just yet. What "is" possible is that they stop making big die chips for the consumer market, but even that is just a remote possibility, IMO.

Regards,
SB
 
Logic dictates that GK100 was at least planned and at some point they decided it was...

Sounds like it exists only in the form of a drawing... :LOL: Actually I am sure they have multiple cards running perfectly fine. :mrgreen:

What "is" possible is that they stop making big die chips for the consumer market

Everyone writes this... but I have never seen anyone care to comment in depth and try to analyse the root cause of this tremendous problem.
How about blaming TSMC and its equipment suppliers for the insane wafer pricing? (By the way, those equipment suppliers are themselves a very intriguing topic for discussion, because of the exponentially growing prices they demand.)
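
To put the wafer-pricing complaint in concrete terms, here's a toy calculation. Both wafer prices and the density gain are assumptions for illustration; the real 40nm and 28nm figures were never public:

Code:
# Illustrative only: assumed wafer prices and an assumed density gain
# for the 40nm -> 28nm transition. Real TSMC figures are not public.
WAFER_COST_40NM = 3000.0  # assumed USD per 300 mm wafer
WAFER_COST_28NM = 5000.0  # assumed USD per 300 mm wafer
DENSITY_GAIN    = 1.9     # assumed transistors-per-mm^2 improvement

# The cost of a fixed transistor budget scales with wafer cost / density.
relative = (WAFER_COST_28NM / DENSITY_GAIN) / WAFER_COST_40NM
print(f"cost per transistor at 28nm vs 40nm: {relative:.2f}x")  # -> ~0.88x

If wafer prices rise ~67% while density less than doubles, the shrink delivers only a ~12% cost-per-transistor saving instead of the ~47% you would get at flat wafer prices - which is exactly why big dies on a new node stop making economic sense.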
 
Logic dictates that GK100 was at least planned and at some point they decided it was unmanufacturable or unprofitable to make manufacturable. Unlike Fermi, they didn't go ahead and attempt to release an unprofitable chip. Instead they canned it and moved on to its refresh. THAT is the change from the Fermi situation.

All you have to do is look at the entire history of Nvidia's graphics development.

Sure they could pull an AMD and stop making big die chips, but I somehow don't see that happening just yet. What "is" possible is that they stop making big die chips for the consumer market, but even that is just a remote possibility, IMO.

Regards,
SB

But at what point?

It is basically common knowledge that such a large chip is not doable at the very beginning of a new process. GK100 may have been considered at some point, but beyond that, I don't know. It seems likely that any plan to release a GK100 was dropped as soon as the full scope of the GF100 disaster became clear - that would have been 2009, up to three years before a potential release.

To "can" something means to me that it was well into development, maybe even close to manufacturing. I highly doubt GK100 ever was at this stage. I believe the decision not to release this early was made very very early.

I'm sure it was never Nvidia's intention not to make a large die chip, but only to postpone it to a more suitable point in time. That would automatically mean there would be only one big die this generation, so no refresh: it would not be profitable to have such an expensive chip replaced a mere six months after its introduction.

It always makes me chuckle when people get so upset about this. They hear "Nvidia's midrange beat AMD's highend" and automatically rush to the defence. They don't take into account that AMD's "highend" is pretty small and reasonable by Nvidia's standards, and that AMD has a similar approach with Pitcairn too - just a tier smaller. So both statements/reactions are wrong in a way.
 
Sure they could pull an AMD and stop making big die chips, but I somehow don't see that happening just yet. What "is" possible is that they stop making big die chips for the consumer market, but even that is just a remote possibility, IMO.

Regards,
SB

AMD dies are creeping back up anyway. Tahiti is 365 mm^2, and rumors put Sea Islands at a nice bump over that (I believe over 400 mm^2).

But yeah, I think what happened at Nvidia is they were just having too many problems getting their 500 mm^2+ dies out the door. So GK110 came at 292 mm^2 somewhat on time, while a rumored 500+ chip doesn't show up.
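
As a quick illustration of why 500 mm^2+ dies are so much harder to get out the door, here's a simple Poisson-yield comparison. The defect density is an assumed number; the die areas are the ones discussed in this thread, plus Pitcairn at roughly 212 mm^2:

Code:
import math

# Assumed defect density for a maturing 28nm process (~0.2 defects/cm^2).
DEFECT_DENSITY = 0.002  # defects per mm^2

# Die areas in mm^2 as discussed in this thread; Pitcairn is ~212 mm^2.
for name, area in [("Pitcairn", 212.0), ("GK104", 292.0),
                   ("Tahiti", 365.0), ("big die", 550.0)]:
    # Poisson model: yield = probability a die catches zero defects.
    y = math.exp(-area * DEFECT_DENSITY)
    print(f"{name:8s} {area:5.0f} mm^2 -> yield ~{y:.0%}")
# -> ~65%, ~56%, ~48%, ~33% respectively.

Yield falls exponentially with area, and a bigger die also means fewer candidates per wafer, so past ~500 mm^2 the pain compounds quickly.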
 
Rangers said:
So GK110 came at 292 mm^2 somewhat on time, while a rumored 500+ chip doesn't show up.
Typo? GK104 is 292 mm^2, GK110 is over 500 mm^2 and will certainly be sold to the consumer market eventually.
 
I always wonder how this kind of statement has managed to become common knowledge, especially considering that it's far from being accurate.

If you're referring to GT200, that was the 65nm node. As we get smaller, nodes get more problematic and it takes longer for them to mature. Isn't it logical that after Fermi Nvidia would not make the same mistake and play it safe?
 
Typo? GK104 is 292 mm^2, GK110 is over 500 mm^2 and will certainly be sold to the consumer market eventually.

Yeah.

But what good is "eventually" if the features don't even keep up? Sooner or later we move to DX 11.1 and whatever else. The next AMD cards will likely close or even eliminate its performance advantage as well. The window for GK110 to be relevant is slim and closing rapidly.

Also, I don't accept that it's a certainty it will be sold to consumers eventually. I don't know that.
 
If you're referring to GT200, that was the 65nm node. As we get smaller, nodes get more problematic and it takes longer for them to mature. Isn't it logical that after Fermi Nvidia would not make the same mistake and play it safe?

Things certainly get easier with time, but there are too many unknowns to call this alone "logical". While GF100 had some troubles, do we know that it was really so dramatic? What about manufacturing capacity constraints, or taking more time to design the GPGPU part? AMD pushed Tahiti out first, and it ain't exactly a small chip either.
 
Anyway, in the context of Fermi it is only natural that there would not be a GK100. I think we will see this pattern with Maxwell too: small die first, large die later, and no refresh for the latter.
 
I wonder if it's true that the 8000 series gives us a bigger die size than the 7000 series. If we are still on the same 28nm process, a now-matured process should lead to better overall performance per watt. But does the 8980's GCN architecture improve performance over the 7970 at the same specifications?
 
Rangers said:
The window for GK110 to be relevant is slim and closing rapidly.
No it isn't. AMD can't move to 20 nm any sooner than Nvidia. Just based on rumored specs, it will almost certainly outperform the HD8970 or whatever it ends up being called. Of course, GK110 will have a larger die, so that is to be expected.

Rangers said:
Also, I don't accept that it's a certainty it will be sold to consumers eventually. I don't know that.
OK...
 
No it isn't. AMD can't move to 20 nm any sooner than Nvidia. Just based on rumored specs, it will almost certainly outperform the HD8970 or whatever it ends up being called. Of course, GK110 will have a larger die, so that is to be expected.

OK...

Well, the only rumors out for an AMD card are for an HD 8870; we don't know what AMD has planned for its top tier.
 
From VR-Zone: "AMD Next Generation Codenames Revealed: 2013, 2014, 2015 GPUs Get Names."

We already knew Sea Islands (2013), but apparently there's also Volcanic Islands (2014) and Pirates Islands (2015).

Yes, you read that correctly. Paying tribute to legendary pirates such as Blackbeard, Captain Hook or, well, Captain Jack Sparrow, AMD's imaginative engineers are targeting the 20nm process with 14nm APUs in mind.
 