NVIDIA Kepler speculation thread

If GK110 can't make it to the desktop until, say, early 2013, I'm personally not so sure it'll make all that much sense. The other problem for NVIDIA is that if GK110 stays HPC-only, I doubt they'll be able to cover the R&D expenses despite very high margins.
I don't see why it won't make it into the PC space. They already have a part that is in roughly that market segment: the GTX 690.
 
On the contrary, the card is clocked extremely low... (just look at the benchmark scores given to see it, of course keeping in mind this is a Quadro, not a GeForce)...
Are they using some seriously low-voltage GDDR5? It looks like the RAM runs at just half the frequency (750 MHz, 3 Gbps) of the desktop parts. That seems too low, doesn't it?
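For reference, a rough sketch of how those numbers relate; the 256-bit bus width used below is an assumption for illustration, not a spec from this thread:

```python
def gddr5_bandwidth_gbs(command_clock_mhz, bus_width_bits):
    """Approximate GDDR5 bandwidth in GB/s.

    GDDR5 transfers 4 bits per pin per command-clock cycle (double
    data rate on a write clock running at twice the command clock),
    so the per-pin data rate is 4x the command clock.
    """
    per_pin_gbps = command_clock_mhz * 4 / 1000
    return per_pin_gbps * bus_width_bits / 8

# 750 MHz command clock -> 3.0 Gbps per pin, matching the figure above.
# On an assumed 256-bit bus that is 96 GB/s, versus 192 GB/s for a
# desktop GTX 680 running its memory at 1500 MHz (6 Gbps per pin).
```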
 
So where is your proof, hm?

There is far more evidence affirming its existence than not, and continually treating everyone you disagree with as if you were Al Gore just makes this that much more enjoyable.

First off, it is a fact that the GTX 680 was renamed just prior to launch. Its intended launch name was the GTX 670 Ti, and card photographs released a few weeks prior to launch clearly show this. Please take note that the card itself is the same length as the cooler. All reference GTX 670s are much shorter than the card's cooler, so that card could never have been anything but the 680. There is also additional proof HERE that nVidia did indeed pull their true flagship card shortly after AMD released the HD7970, then re-labeled the GTX 670 Ti as the GTX 680.

At the time there were many theories as to why nVidia would do this, but most rational thinkers (i.e., not AMD fanboys) figured that since the full-blown Kepler GK100 chip, with its massive complexity, size, and production cost, didn't seem to be required to beat AMD's flagship card, nVidia made a smart decision and pulled it. Given the FACT that TSMC had all sorts of yield issues with 28nm, as well as their mistake of signing too many production contracts... everyone knew high-yield wafers were not coming anytime soon, and more importantly, the price per wafer is extremely expensive when yields are low, making a massive GK100 far too costly.

Now... I'm sure some fanboys just read that and are trying to keep their heads from exploding, all while typing a reply that will no doubt say something to the effect of "TSMC didn't have yield problems, the only problem was nVidia's faulty designs, blah blah..." while claiming that if 7970 cards were available, it must have been nVidia's screw-up... blah...

The only problem with that line of irrational thinking is that 6 months have now passed, and every single official report from TSMC and nVidia, including all those very important legally binding reports to shareholders and the Securities and Exchange Commission (SEC), showed that while there were yield issues at first, they were eventually sorted out, and the subsequent product delays were actually caused by overwhelming demand for the GTX 680 combined with AMD, Qualcomm, nVidia, and another company ALL waiting in the same line at TSMC for 28nm wafers. The Steam Hardware Survey clearly showed that nVidia's Kepler sales were strong while AMD's fell completely flat once Kepler hit the street.

What was the point of all that, since it clearly was not about GK100, you might ask? So what if they renamed it "GTX 680" instead of "GTX 670 Ti"... that doesn't prove anything!

Well, clearly it takes pressure off NVIDIA to introduce a part based on its "big chip" GeForce Kepler design (GK1x0). It could also save NVIDIA tons of R&D costs for its GTX 700 series, because it can brand GK1x0 under the GTX 700 series and invest relatively less in a dual-GK104 graphics card to ward off the threat of the Radeon HD 7990 (which it clearly has done, since the 7990 is missing in action), saving (read: sandbagging) GK1x0 for AMD's next-generation Sea Islands, slated for later this year if all goes well for them.

There was also THIS link in which nVidia themselves confirmed the 680 was intended to be the 670 Ti.

As for more proof, I think the GPU's design speaks volumes. The FP64 rate of the GTX 680 is 1/24th of its FP32 rate, compared to the GTX 580's FP64 rate of 1/8th of FP32. The GTX 560 Ti has a 1/12 FP64 rate -- it's common practice for both AMD and Nvidia to drastically cut FP64 performance on their less expensive offerings.
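To put those ratios in concrete terms, here is a minimal sketch; the FP32 figures are approximate reference-clock numbers assumed for illustration:

```python
def fp64_gflops(fp32_gflops, rate_divisor):
    # FP64 throughput expressed as a fixed fraction of FP32 throughput
    return fp32_gflops / rate_divisor

# Approximate reference-clock FP32 figures (assumed for illustration):
# GTX 680: ~3090 GFLOPS FP32 at a 1/24 rate -> ~129 GFLOPS FP64
# GTX 580: ~1581 GFLOPS FP32 at a 1/8 rate  -> ~198 GFLOPS FP64
# i.e. the newer card, though much faster in FP32, is actually
# slower in double precision -- not what you'd expect of a flagship.
```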

Third, GF100 and GF110 were ~525mm² parts that made up the GTX480/470 and GTX580/570 respectively. GF104 and GF114 were ~323mm² parts that made up the GTX460 and GTX560 Ti/560 respectively. GK104 is a 294mm² part that makes up the GTX680. The difference there is pretty damn obvious -- suddenly a modest die is being labeled as the top part, instead of being labeled as an upper mid-range part.

It's also pretty clear, with GK110 (the existence of which is official and undeniable) coming around later this year or early next year, that Nvidia didn't simply abandon their gigantic dies. The mountain of evidence is undeniable -- GK100 existed at some point. There are FAR too many coincidences as well as clear patterns. It also makes absolutely zero sense that nVidia would, at the very last moment, change a product's name AND its packaging. (That makes me wonder if my GTX 680 has a sticker over the place where early cards said GTX 670 Ti. I doubt nV would be so dumb as to not just replace the whole cover.)

Lastly, if nVidia had released the GK100, what would they have gained by completely annihilating anything and everything made by AMD?

Extra bragging rights? (they got this anyway)
AMD fanboys crying themselves to sleep? (the 680 did that)
Fanboys suddenly claiming Perf/Watt was never important and the only thing that matters is GPGPU. ;) (heh... they did that too)

Truth be told, if nVidia had released GK100... it would have been corporate suicide. With a report out just today stating that AMD's Q2 profits are down 40%, along with more delays for some Trinity desktop APUs as well as the HD7990 (if that ever happens)... oh yeah, and you can't forget that massive failure called "Bulldozer". I'm personally worried about AMD's future. Not only because it's good to have choices in which products I buy, but because if AMD's graphics division or AMD itself were to cease to exist, or fail to stay even remotely competitive... I'm quite sure the U.S. Justice Department and SEC (along with our business-hating President) would immediately file anti-trust suits against both Intel and nVidia.

Anyways... time to get to sleep.


Edit: I just wanted to add my own personal feelings about those of you who are in complete denial regarding GK100 ever existing. While it is indeed true that none of us have actually seen a GK100 with our own eyes, the circumstantial evidence I have listed is undeniable. And if you actually took a few minutes to search around, far more information is out there.

For someone to claim there is no proof of GK100's existence anywhere out there, when in just a few minutes I gathered a ton of evidence... the only possible excuse for such behavior can be one of two things: either the person is blinded by fanboyism, or he's just your typical self-entitled kid whose parents bought him a GTX 680, only for him to find out nVidia didn't "GIVE" him the GK100 he deserves.


/logout
 
The picture of the alleged 670Ti was debunked as a fake if you look closely enough at the engraved numbers. This was discussed quite some time ago.

And even IF it were as you say, this is no proof that GK100 was intended for release as the GTX 680 at that time. Nvidia could very well have clocked GK104 even higher, at the expense of power consumption, to get the required 680 part. Also, you don't take into account that there are always at least two SKUs, a "full" part and a "salvage" part. If the naming were as you say, there would only be a spot for one part, called "680". Or would they go with a "5" moniker, i.e. 675, and then 695 for the dual SKU?

I never said GK100 was pulled because it was not needed. GK100 most likely didn't exist in any presentable form; it was probably just a concept from way, way before the Kepler release.

I'm sure Nvidia thought about making and releasing a GK100 at some point in time, but I highly doubt that thought bore fruit in any concrete form. There is no proof at all that it existed as a chip or test chip, or that it taped out unsuccessfully. You have to differentiate there, because there are many, many "levels of existence". Most likely Nvidia decided very early in the Kepler design process to push the big die to the back of the line and release it when it could actually be made.

All your "proof" regarding GK100 is completely inconclusive, just conjecture -- you're dancing around the subject, disregarding the most essential question: what do we mean by "existence"? I know people like to believe that there was an actual chip, existing in physical form, that was plagued by problems and was therefore canned. But there is no proof of that.
 
NV intended, up to some point, to announce GK104/Kepler at CES early this year, but canned the release, claiming it was a pure business decision. Vendors had received roadmaps for that release showing a "GTX680/2GB", and yes, tidbits of that launch material even circulated behind the curtains, including early board pictures that clearly showed 8 GDDR5 chips and hence a 256-bit bus. Ironically, many vendor employees back then thought it was the Kepler top dog, but it would have been quite absurd for a high-end part to have only a 2GB framebuffer, and even more so a 256-bit bus.

There was no renaming of GK104 SKUs I've heard of.
 
There was no renaming of GK104 SKUs I've heard of.

It's not necessary to. :LOL:
But in any case, GK104's real name should be GT 650 Ti or GTX 660, and in no possible case GTX 670 or GTX 680. Its performance is that abysmal. Of course, you don't have a real/serious competitor to show it (because Tahiti is too underwhelming). ;)
 
It's not necessary to. :LOL:
But in any case, GK104's real name should be GT 650 Ti or GTX 660, and in no possible case GTX 670 or GTX 680. Its performance is that abysmal. Of course, you don't have a real/serious competitor to show it (because Tahiti is too underwhelming). ;)

The only thing that is abysmal about the 680 or the 7970 is their respective prices for the performance they're delivering at the moment. In any other case - prices ignored - try comparing a GTX 680 vs. a GTX 560 Ti in 3D, and it's pretty obvious just how "lackluster" the performance really is in terms of perf/mm2 and/or perf/W.

If the GTX 680 had an MSRP of, say, 300 bucks, you obviously wouldn't come to that conclusion. Given that a GTX 690, on the other hand, sets you back a grand, if GK110 were shipping today under those pricing conditions it would most likely cost no less than seven to eight hundred bucks, and that's conservative.

Comparing a 40G/530mm2 die on a GTX580 against a 28HP/294mm2 die on a GTX680, doesn't necessarily mean that you're comparing apples to apples.
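The perf/mm2 argument above can be made concrete with a quick sketch. The performance index values below are hypothetical, chosen only for illustration; the die areas (GK104 ~294mm2, GF114 ~323mm2) are the ones quoted earlier in this thread:

```python
def perf_per_mm2(perf_index, die_area_mm2):
    # performance per unit of die area, in index points per mm^2
    return perf_index / die_area_mm2

# Hypothetical index values: suppose a GTX 680 scores 200 where a
# GTX 560 Ti scores 100 on some benchmark index.
ratio = perf_per_mm2(200, 294) / perf_per_mm2(100, 323)
# -> roughly a 2.2x gain in perf/mm^2 across the node transition,
# which is why "prices ignored" the chip itself is hardly abysmal.
```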
 
The only thing that is abysmal about the 680 or the 7970 is their respective prices for the performance they're delivering at the moment. In any other case - prices ignored - try comparing a GTX 680 vs. a GTX 560 Ti in 3D, and it's pretty obvious just how "lackluster" the performance really is in terms of perf/mm2 and/or perf/W.
Right. It seems to me that AMD underperformed this generation, and as a result NV has been able to overprice their hardware compared to previously.
 
Right. It seems to me that AMD underperformed this generation…
Did it really? Nvidia doesn't have anything faster than GK104 at this moment and Tahiti GE is at least as fast (maybe faster). In fact AMD has the single-chip performance crown this time, while Nvidia had it with GTX 580, GTX 480, GTX 285, GTX 280, 9800 GTX, 8800 Ultra and 8800 GTX. So, tell me more about AMD and its underperforming products ;-).
 
Yes, I'm not sure how that conclusion can be drawn when you consider the performance of dies such as GF110 relative to the die sizes of Tahiti and Pitcairn.
 
Right. It seems to me that AMD underperformed this generation

True, and remember it was AMD who "overpriced their hardware compared to the previous generation". Or do you not remember the $550 HD7970 or the $450 HD7950?

and as a result NV has been able to overprice their hardware compared to previously
When Nvidia released the high-end Keplers, they priced them at $500 for the GTX 680 and $400 for the GTX 670, or $50 under the price gouging that AMD was doing. And because the GTXs were better all around, it forced AMD to re-price the HD7970 at $430 ($120 lower) and the HD7950 at $350 ($100 lower).

This shows clearly that any blame for "overpriced their hardware compared to the previous generation" lies with AMD, and if it weren't for Nvidia's competition, AMD would still be gouging users.
 
Yes, AMD underperformed compared to its previous cards.

According to computerbase.de performance rating, different resolutions (at release):
3870 -> 4870: 55-70%
4890 -> 5870: 50-60%
6970 -> 7970: 30-40%

AMD's underperformance has nothing to do with Nvidia; it's a fact on its own. Numbers don't lie.
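Those generation-over-generation percentages are just ratios of performance index scores; a trivial sketch (index values hypothetical, for illustration only):

```python
def uplift_pct(new_score, old_score):
    # generational performance uplift in percent
    return (new_score / old_score - 1) * 100

# Hypothetical index values: a card scoring 135 against its
# predecessor's 100 is a 35% uplift, which sits inside the 30-40%
# band quoted above for the 6970 -> 7970 step.
```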
 
In fact AMD has the single-chip performance crown this time
You mean the HD7970 GHZ Edition that took 6 months to produce?

You do realize that all it really is is an AMD-approved factory-overclocked card, one that burns even more power and is louder than the original HD7970.

Also, the overclocked GeForce GTX 680 cards fared much better against the recently released Radeon HD 7970 GHz Edition than the reference cards, outpacing AMD’s latest more often than not.

Read more at http://hothardware.com/Reviews/GeForce-GTX-680-RoundUp-EVGA-Zotac-Gigabyte-Asus/?page=14
If Nvidia really cared about the "paper title" of world's fastest, all they would need to do is release an officially sanctioned higher-clocked GTX 680. But I believe they feel it really doesn't matter (as shown in the reviews).
 
You mean the HD7970 GHZ Edition that took 6 months to produce?

You do realize that all it really is is an AMD-approved factory-overclocked card, one that burns even more power and is louder than the original HD7970.

If Nvidia really cared about the "paper title" of world's fastest, all they would need to do is release an officially sanctioned higher-clocked GTX 680. But I believe they feel it really doesn't matter (as shown in the reviews).

If Nvidia had a bunch of faster 680s to release, they would. Instead they've spent the past few months cutting them down further in order to get something yield-worthy.

AMD increases clocks AND adds turbo to the 7970, and will probably do the same with the 7950. Six months after Tahiti's release, there has still been no news of a 7930. Nvidia, meanwhile, chops up GK104 even further 4 months later, and not only that, but does so for a chip that is supposed to be their "champion" midrange card of the generation, one that should sell many times more than the 670 and 680 combined.

Doesn't that tell you where each chip stands? Doesn't that tell you that GK106 is underperforming? Nvidia is doing a good job with a bad situation, helped along by AMD's usual marketing dithering, but that's about it.
 
Yes, AMD underperformed compared to its previous cards.

According to computerbase.de performance rating, different resolutions (at release):
3870 -> 4870: 55-70%
4890 -> 5870: 50-60%
6970 -> 7970: 30-40%

AMD underperforming has nothing to do with Nvidia, it's a fact on its own. Numbers don't lie.

And what are the numbers if you swap out the 7970 for the 7970 GE and use Catalyst 12.7?

AMD made a mistake in that the 7970 was clocked far too low, and they paid for it. They should have EOL'd the 7970, and the 7970 GE should have been named the 7980 for marketing reasons.
 
The GE is an extra speed bin and a tweaked BIOS + some slightly faster RAM.
It is no more overclocked than a 3.2 GHz i7 is an overclocked 3.0 GHz i7.
This isn't the first time a graphics architecture had more than two bins, either.

Nor is a bin that skims from the top of the variation curve sufficient to remove the vanilla 7970, which along with the 7950 covers the vast majority of the chips they produce.
 
And what are the numbers if you swap out the 7970 for the 7970 GE and use Catalyst 12.7?

AMD made a mistake in that the 7970 was clocked far too low, and they paid for it. They should have EOL'd the 7970, and the 7970 GE should have been named the 7980 for marketing reasons.

Then it would be a little better, but still fall short. Using 12.7 would be unfair, since the numbers I provided were with the release drivers and would also improve with subsequent drivers. Add to that the increased power consumption and noise of the 7970 GE. I'm pretty sure this card would not have been well received 6 months ago.
 
Something that was "merely a factory overclock" would not "take six months", nor would it fit within the same power profile as the original design. XT2 is the first result of a complete re-profiling of the ASIC: a new production model and a completely new binning mechanism.
 