NVIDIA Kepler speculation thread

No way Nvidia even reasonably tried to produce a 7bn-transistor monster that early in the deployment of the 28nm process. Everyone who believes that should check their wits. Kepler has been in the making for 3 years now. It is believable that the decision to bring BigK so late was made in 2010, maybe even late 2009.

If you have a product more or less ready, you release it. Everything else is a terrible waste of money.
 
I think that's reading much too far into it. nVidia didn't deliver GK100. This doesn't necessarily mean that they couldn't. Unfortunately, we'd need some inside information to know more about why the GK100 never surfaced.

You think they didn't want to release GK100, even if just for the HPC and other professional markets, since the 7970 beats the crap out of their current top-of-the-line GPU on those fronts (no matter whether you consider GF110 or GK104 the top-of-the-line GPU) in everything except the fact that it doesn't support CUDA?

There are two possible scenarios IMO: either they didn't want another Fermi/GF100 situation, or GK100 was in even worse condition than GF100 ever was, and actually unproducible.
 
You think they didn't want to release GK100, even if just for the HPC and other professional markets, since the 7970 beats the crap out of their current top-of-the-line GPU on those fronts (no matter whether you consider GF110 or GK104 the top-of-the-line GPU) in everything except the fact that it doesn't support CUDA?
I don't think that's a major blocking issue. AMD did that by running the card at a significantly higher power envelope. If NV really wanted to beat that part, they clearly could, as the GK1xx line is quite a bit more efficient in die area and power consumption for a given level of 3D gaming performance.

The fact that the 7970 GHz Edition is now selling for so much less than the GTX 680 seems to indicate to me that most consumers agree: the 7970 GE is just not worth it because it's too hot, too noisy, and uses too much power. That, or AMD didn't do a great job of evangelizing the 7970 and in most people's minds it's still slower than the GTX 680.

There are two possible scenarios IMO: either they didn't want another Fermi/GF100 situation, or GK100 was in even worse condition than GF100 ever was, and actually unproducible.
There are other explanations as well. For example, after they found out what the 7970 was capable of, they could have realized, "Okay, our GK104 is fast enough to beat the 7970, so if we don't bring out the GK100, we can sell the GK104 as if it were the GK100. And we don't have to lose profits on the GK100's lower yields due to the larger die area." It's really easy, in my mind, to see why it would be so incredibly attractive to not bother with a GK100 if they didn't need to in order to beat the competition.
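To put rough numbers on the yield point, here's a back-of-the-envelope Python sketch using the classic dies-per-wafer approximation and a simple Poisson yield model. The die areas are ballpark figures and the defect density is pure guesswork for an immature 28nm process, so treat the output as illustrative only:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard approximation: wafer area / die area, minus an edge-loss
    # term. Ignores scribe lines and edge exclusion.
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_mm2):
    # Poisson model: fraction of dies that catch zero defects.
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Assumptions: GK104 ~294 mm^2, a hypothetical big Kepler ~550 mm^2,
# 0.4 defects/cm^2 (0.004 per mm^2) -- a guess for early 28nm.
d0 = 0.004
for name, area in [("GK104", 294), ("BigK", 550)]:
    candidates = dies_per_wafer(area)
    y = poisson_yield(area, d0)
    print(f"{name}: ~{candidates:.0f} dies/wafer, yield ~{y:.0%}, "
          f"~{candidates * y:.0f} good dies/wafer")
```

With those made-up inputs you get roughly 60 good GK104 dies per wafer versus about 11 for the big die, which is exactly why selling GK104 at the top of the stack would look so attractive.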
 
You mean the HD 7970 GHz Edition that took 6 months to produce?
It was released 3 months after the GTX 680, just like the GTX 680 was released 3 months after the original HD 7970.

You do realize that all it really is, is an AMD-approved factory-overclocked card that burns even more power and is louder than the original HD 7970.
Well, by that logic, the GTX 580 was nothing more than an OCed card that was louder and more power-hungry than the competitor's HD 6970.

CB numbers:
GTX 580 consumed 43 W more than HD 6970 and offered:
+15 % @ 1920×1200 / MSAA 8×
+12 % @ 2560×1600 / MSAA 4×
+2 % @ 2560×1600 / MSAA 8×

HD 7970 GE consumes 43 W over GTX 680 and offers:
+15 % @ 1920×1080 / MSAA 8×
+7 % @ 2560×1600 / MSAA 4×
+20 % @ 2560×1600 / MSAA 8×

Quite interesting, isn't it? :)
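For fun, you can turn those deltas into rough perf/W ratios. The +43 W figures and the performance percentages come from the CB numbers above, but the baseline board powers are my own assumptions (roughly in line with reviews of the era), so take the exact values with a grain of salt:

```python
def perf_per_watt_ratio(perf_gain_pct, base_power_w, extra_power_w=43):
    # Perf/W of the faster card relative to its rival (1.0 = parity).
    perf = 1 + perf_gain_pct / 100
    power = (base_power_w + extra_power_w) / base_power_w
    return perf / power

# GTX 580 vs HD 6970; assume ~210 W for the 6970 in CB's test (a guess).
for pct in (15, 12, 2):
    print(f"GTX 580 at +{pct}%: {perf_per_watt_ratio(pct, 210):.2f}x perf/W")

# HD 7970 GE vs GTX 680; assume ~170 W for the 680 in CB's test (a guess).
for pct in (15, 7, 20):
    print(f"HD 7970 GE at +{pct}%: {perf_per_watt_ratio(pct, 170):.2f}x perf/W")
```

In both matchups the faster card lands slightly below 1.0x on perf/W, which is what makes the symmetry amusing.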
 
I don't think that's a major blocking issue. AMD did that by running the card at a significantly higher power envelope. If NV really wanted to beat that part, they clearly could, as the GK1xx line is quite a bit more efficient in die area and power consumption for a given level of 3D gaming performance.
That's due to the GPGPU features requiring more power; see Pitcairn, whose perf/W (and perf/mm²) is equal to or even better than GK104's (and it's still not as castrated GPGPU-wise as GK104 is).
A gaming-only card at the 7970's power envelope would beat the 680 silly.
The fact that the 7970 GHz Edition is now selling for so much less than the GTX 680 seems to indicate to me that most consumers agree: the 7970 GE is just not worth it because it's too hot, too noisy, and uses too much power. That, or AMD didn't do a great job of evangelizing the 7970 and in most people's minds it's still slower than the GTX 680.


There are other explanations as well. For example, after they found out what the 7970 was capable of, they could have realized, "Okay, our GK104 is fast enough to beat the 7970, so if we don't bring out the GK100, we can sell the GK104 as if it were the GK100. And we don't have to lose profits on the GK100's lower yields due to the larger die area." It's really easy, in my mind, to see why it would be so incredibly attractive to not bother with a GK100 if they didn't need to in order to beat the competition.

Only in the gaming arena.
 
GK104 doesn't really beat Tahiti; it's more of a tie (GHz Edition, aka overclocking) or even a slim loss (when you realize Tahiti is faster under the most demanding conditions).

"Okay, our GK104 is fast enough to beat the 7970, so if we don't bring out the GK100, we can sell the GK104 as if it were the GK100. And we don't have to lose profits on the GK100's lower yields due to the larger die area."

Lose profits? They could price GK100 at $1000, $700, or whatever, and I can't believe it wouldn't be highly profitable, especially once yields got rolling (it's a similar size to the 580, after all)... You assume it somehow means they'd have to drop the GK104's price. It doesn't.
 
Yes, AMD underperformed compared to its previous cards.

According to computerbase.de's performance rating, across different resolutions (at release):
3870 -> 4870: 55-70%
4890 -> 5870: 50-60%
6970 -> 7970: 30-40%

AMD underperforming has nothing to do with Nvidia, it's a fact on its own. Numbers don't lie.
Why no 5870 -> 6970? You skipped a generation. And the presentation of numbers is often a lie. There's even a book about it.

There are other explanations as well. For example, after they found out what the 7970 was capable of, they could have realized, "Okay, our GK104 is fast enough to beat the 7970, so if we don't bring out the GK100, we can sell the GK104 as if it were the GK100. And we don't have to lose profits on the GK100's lower yields due to the larger die area." It's really easy, in my mind, to see why it would be so incredibly attractive to not bother with a GK100 if they didn't need to in order to beat the competition.
I'm sure Nvidia never taped out a GK100, and GK110 taped out around December, so even if Nvidia discovered Tahiti's performance from engineering samples, there's no way it affected the tape-out of GK110 or GK100. It's likely Nvidia feels the competitive position of GK104 allows them to take their time shipping GK110 as a gaming part, letting them focus on HPC where they can make more profit. If GK100 existed, it was canceled long before Tahiti's performance was known.
 
The fact that the 7970 GHz Edition is now selling for so much less than the GTX 680 seems to indicate to me that most consumers agree: the 7970 GE is just not worth it because it's too hot, too noisy, and uses too much power. That, or AMD didn't do a great job of evangelizing the 7970 and in most people's minds it's still slower than the GTX 680.
I would like to know how you're able to say this, when the product isn't even on the market yet.

Why no 5870 -> 6970? You skipped a generation. And the presentation of numbers is often a lie. There's even a book about it.
Obviously because they're both on the same 40nm process.
 
I would like to know how you're able to say this, when the product isn't even on the market yet.
Ahhh, whoops. I guess the listings I saw were for the vanilla 7970, even though I did a search for the GHz edition. That's sure going to confuse consumers down the line...

But it does make much more sense, given the relative performance of the parts.
 
Ahhh, whoops. I guess the listings I saw were for the vanilla 7970, even though I did a search for the GHz edition. That's sure going to confuse consumers down the line...

But it does make much more sense, given the relative performance of the parts.
Well, performance aside, people will pay a premium to join the green team.
 
I think that's reading much too far into it. nVidia didn't deliver GK100. This doesn't necessarily mean that they couldn't. Unfortunately, we'd need some inside information to know more about why the GK100 never surfaced.


Because you believe Nvidia put some hundreds of millions of dollars into developing GK100 and then really decided to put it on hold? I know Nvidia has some money in the bank, but seriously, I can't believe it.
Can you imagine the number of people who worked on it?

By delaying the "Kepler" people were waiting for, they push Maxwell back by at least 1 to 2 years. We can ask ourselves whether this "Kepler" GK100 simply wasn't ready, after it took two generations to release Fermi "the right way" (a year beyond the initial plan). They had already lost a year on what was planned. The big luck they had with the GTX 500 series was that AMD was unable (due to TSMC abandoning 32nm) to release what it had planned, either.
 
Because you believe Nvidia put some hundreds of millions of dollars into developing GK100 and then really decided to put it on hold? I know Nvidia has some money in the bank, but seriously, I can't believe it.
Can you imagine the number of people who worked on it?
You think it's all gone to waste? That's only true if GK110 started from scratch.

TSMC's production can't bear a die that large right now, and especially not back when Kepler launched.
 
You think it's all gone to waste? That's only true if GK110 started from scratch.

TSMC's production can't bear a die that large right now, and especially not back when Kepler launched.

Of course not... but a lot of money and time lost... I'm not even sure they could have released it; I think not. Just seeing how things have gone with 28nm production, releasing a 500mm² monster would certainly have been suicide...
 
Of course not... but a lot of money and time lost... I'm not even sure they could have released it; I think not. Just seeing how things have gone with 28nm production, releasing a 500mm² monster would certainly have been suicide...
Agreed. I wonder if phase 1 of Gigafab 15 ever went online...
 
GK110 was always planned for AFTER GK104 AFAIK. However, it sounds like there was a one quarter delay or so.

The good news is that Nvidia learned their lesson regarding taping out huge chips on new process technology!

DK
 
GK110 was always planned for AFTER GK104 AFAIK. However, it sounds like there was a one quarter delay or so.

The good news is that Nvidia learned their lesson regarding taping out huge chips on new process technology!

DK
Well, yeah. 11x chips follow 10x chips. I.e., GK114 follows GK104.
 