NVIDIA Kepler speculation thread

The GE is an extra speed bin and a tweaked BIOS + some slightly faster RAM.
It is no more overclocked than a 3.2 GHz i7 is an overclocked 3.0 GHz i7.
This isn't the first time a graphics architecture had more than two bins, either.

Nor is a bin that skims from the top of the variation curve sufficient to remove the vanilla 7970, which along with the 7950 would cover the vast majority of the chips they produce.

Do you really think the vast majority of 7970's with a voltage bump won't easily make the GHz edition SKU (albeit with worse power characteristics)?

The point about GK104 is that it is about to be cut down and used for a part that is clearly destined and marketed to sell an awful lot more.
 
Do you really think the vast majority of 7970's with a voltage bump won't easily make the GHz edition SKU (albeit with worse power characteristics)?
A lot of things can be achieved when you push a chip outside its validated parameters.
That's what overclocking does.
If the manufacturer's parameters for that chip don't need to be exceeded, then a faster chip out of the factory isn't being overclocked.
 
A lot of things can be achieved when you push a chip outside its validated parameters.
That's what overclocking does.
If the manufacturer's parameters for that chip don't need to be exceeded, then a faster chip out of the factory isn't being overclocked.

Ok I get what you're saying, but the point I'm trying to make is that AMD set their parameters a bit too low in the first place.

We know what the reasons were for it - nobody saw GK104 coming, everybody expected TSMC to make another mess of it. I feel that the top end card should be pushing the boundaries, and that AMD certainly aren't in a position to be cautious there. Hindsight is a wonderful thing though.

What I do feel is that the GHz edition should have been given the 7980 name and a clean break from the losing 7970. This is where AMD fails time and again. People should be talking about the 7980 being the fastest card, not the 7970 GHz edition...seriously who thought that was a good name? Even when it's shortened to GE it sounds more like it's a slower part, LE, SE etc...why do they do these things...
 
Right. It seems to me that AMD underperformed this generation, and as a result NV has been able to overprice their hardware compared to previously.

Because it was Nvidia who released their card first at $550, maybe? lol. That's the funniest thing I've read this week, I think. The only thing we can say is: maybe if AMD had set the initial price lower, Nvidia would have adjusted their price to that lower level when they released their card three months later. (And that is not a good point for AMD.)

Ok I get what you're saying, but the point I'm trying to make is that AMD set their parameters a bit too low in the first place.

We know what the reasons were for it - nobody saw GK104 coming, everybody expected TSMC to make another mess of it. I feel that the top end card should be pushing the boundaries, and that AMD certainly aren't in a position to be cautious there. Hindsight is a wonderful thing though.

What I do feel is that the GHz edition should have been given the 7980 name and a clean break from the losing 7970. This is where AMD fails time and again. People should be talking about the 7980 being the fastest card, not the 7970 GHz edition...seriously who thought that was a good name? Even when it's shortened to GE it sounds more like it's a slower part, LE, SE etc...why do they do these things...

To keep it simple, AMD didn't know 28nm would be able to clock so high. Of course you can get some engineering samples back from TSMC and see it, but you don't know whether that will hold once production starts to ramp; this is not an exception. But when all the chips coming back easily hit 1 GHz+... that's another story. AMD set the default stock speed at 925 MHz, really conservatively; they could of course have set it higher. But I believe this came down to a simple fact: they took the worst case possible and decided that's where the first bin would start.
 
Actually, as far as TPU is concerned, the GHz Edition exists just as much as any other factory-overclocked GTX 680 or the like.

Look there, no sign of any GHz editions. :mrgreen:

http://www.techpowerup.com/reviews/Sapphire/HD_7950_Flex/28.html

Did it really? Nvidia doesn't have anything faster than GK104 at this moment and Tahiti GE is at least as fast (maybe faster). In fact AMD has the single-chip performance crown this time, while Nvidia had it with GTX 580, GTX 480, GTX 285, GTX 280, 9800 GTX, 8800 Ultra and 8800 GTX. So, tell me more about AMD and its underperforming products ;-).

Perhaps you missed this nice post.
Please, read it, it explains a lot. ;)

There is far more evidence affirming its existence than not, and continually treating everyone you disagree with as if you were Al Gore just makes this that much more enjoyable.

First off, it is a fact that the GTX 680 was renamed just prior to launch. Its intended launch name was GTX 670 Ti, and card photographs released a few weeks prior to launch clearly show this. Please take note that the card itself is the same length as the cooler. All reference GTX 670s are much shorter than the card's cooler, thus that card could never have been anything but the 680. There is also additional proof HERE that nVidia did indeed pull their true flagship card shortly after AMD released the HD7970, then re-labeled the GTX 670 Ti as the GTX 680.

At the time there were many theories as to why nVidia would do this, but most rational thinkers (i.e. not AMD fanboys) figured that since the full-blown Kepler GK100 chip, with its massive complexity, size, and production cost... didn't seem to be required to beat out AMD's flagship card, nVidia made a smart decision and pulled the card. Given the FACT that TSMC had all sorts of yield issues with 28nm, as well as their mistake of signing too many production contracts... everyone knew high-yield wafers were not coming anytime soon, and more importantly... the price per wafer is extremely expensive when yields are low, making a massive GK100 far too costly.

Now... I'm sure some fanboys just read that and are trying to keep their heads from exploding, all while typing a reply that will no doubt say something to the extent of "TSMC didn't have yield problems, the only problem was with nVidia's faulty designs, blah blah...." while claiming that if 7970 cards were available, it must be nvidia's screw up.... blah...

The only problem with that line of irrational thinking is that 6 months have now passed and every single official report from TSMC and nVidia, including all those very important legally binding reports to shareholders and the Securities and Exchange Commission (SEC), showed that while there were yield issues at first, they were eventually sorted out, and the subsequent product delays were actually caused by overwhelming demand for the GTX 680 combined with AMD, Qualcomm, nVidia, and another company ALL waiting in the same line at TSMC for 28nm wafers. The Steam Hardware Survey clearly showed that nVidia's Kepler sales were strong while AMD's fell completely flat once Kepler hit the street.

What was the point of all that since it clearly was not about GK100 you might ask? So what if they renamed it "GTX 680" instead of "GTX 670 Ti"... that doesn't prove anything!

Well, clearly it takes pressure off NVIDIA to introduce a part based on its "big chip" from the GeForce Kepler architecture (GK1x0). It could also save NVIDIA tons of R&D costs for its GTX 700 series, because it can brand GK1x0 in the GTX 700 series and invest relatively less in a dual-GK104 graphics card to ward off the threat of the Radeon HD 7990, which it clearly has done since the 7990 is missing in action, and save (read: sandbag) GK1x0 for AMD's next-generation Sea Islands, slated for later this year, if all goes well for them.

There was also THIS link in which nVidia themselves confirmed the 680 was intended to be the 670ti.

As for more proof, I think the GPU's design speaks volumes. The FP64 performance of the GTX 680 is 1/24th of its FP32 performance, compared to the GTX 580 running FP64 at 1/8th of its FP32 performance. The GTX 560 Ti has a 1/12 FP64 rate -- it's common practice for both AMD and Nvidia to drastically cut FP64 performance on their more inexpensive offerings.
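
To put rough numbers on what those ratios mean, here's a quick back-of-the-envelope sketch (my own illustration, not from the post above; the core counts and clocks are the public reference specs as I recall them, so treat the figures as approximate):

```python
# Rough peak-throughput arithmetic showing what a 1/24 vs 1/8 FP64 rate means.
# Reference specs quoted from memory; treat the figures as approximate.

def peak_gflops(cores, clock_mhz):
    """Theoretical peak GFLOPS assuming one FMA (2 flops) per core per clock."""
    return cores * clock_mhz * 2 / 1000.0

cards = {
    # name: (shader cores, shader clock in MHz, FP64 rate as fraction of FP32)
    "GTX 680 (GK104)":    (1536, 1006, 1 / 24),
    "GTX 580 (GF110)":    (512,  1544, 1 / 8),
    "GTX 560 Ti (GF114)": (384,  1645, 1 / 12),
}

for name, (cores, clock, fp64_rate) in cards.items():
    fp32 = peak_gflops(cores, clock)
    print(f"{name}: ~{fp32:.0f} GFLOPS FP32, ~{fp32 * fp64_rate:.0f} GFLOPS FP64")
```

If those figures are right, the GTX 680 has roughly double the GTX 580's FP32 peak yet a lower FP64 peak, which is exactly the kind of cut you'd expect on a midrange-derived part rather than a compute flagship.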

Third, GF100 and GF110 were ~525mm² parts that made up the GTX480/470 and GTX580/570 respectively. GF104 and GF114 were ~323mm² parts that made up the GTX460 and GTX560 Ti/560 respectively. GK104 is a 294mm² part that makes up the GTX680. The difference there is pretty damn obvious -- suddenly a modest die is being labeled as the top part, instead of being labeled as an upper mid-range part.

It's also pretty clear, with GK110 (the existence of which is official and undeniable) coming around later this year or early next year, that Nvidia didn't simply abandon their gigantic dies. The mountain of evidence is undeniable -- GK100 existed at some point. There are FAR too many coincidences going on as well as clear patterns. It also makes absolutely zero sense that nVidia would at the very last moment change a product's name AND its packaging. (That makes me wonder if my GTX 680 has a sticker over the place where early cards said GTX 670 Ti. I doubt nV would be so dumb as to not just replace the whole cover.)

Lastly, if nVidia had released the GK100, what would they have gained by completely annihilating anything and everything made by AMD?

Extra bragging rights? (they got this anyway)
Amd Fanboys crying themselves to sleep? (680 did that)
Fanboys suddenly claiming Perf/Watt was never important and the only thing that matters is GPGPU. ;) (heh... they did that too)

Truth be told, if nVidia had released GK100... it would have been corporate suicide. With a report out just today stating that AMD's Q2 profits are down 40%, along with more delays for some Trinity desktop APUs as well as the HD7990 (if that even happens)... oh yeah, and you can't forget that massive failure called "Bulldozer". I'm personally worried about AMD's future. Not only because it's good to have choices in which products I buy, but because if AMD's graphics division or AMD itself were to cease to exist or fail to stay even remotely competitive... I'm quite sure the U.S. Justice Department and SEC (along with our business-hating President) would immediately file anti-trust suits against both Intel and nVidia.

Anyways... time to get to sleep.


Edit: just wanted to add my own personal feelings about those of you who are in complete denial regarding GK100 ever existing. While it is indeed true that none of us have actually seen a GK100 with our own eyes, the circumstantial evidence that I have listed is undeniable. And if you actually took a few minutes to search around, far more information is out there.

For someone to claim there is no proof of GK100's existence anywhere out there, when in just a few minutes I gathered a ton of evidence... the only possible excuse for such behavior can be one of two things. Either the person is blinded by fanboyism, or they're just your typical self-entitled kid whose parents bought him a GTX 680 only for him to find out nVidia didn't "GIVE" him the GK100 he feels he deserves.


/logout

etc...

Yes, AMD underperformed compared to its previous cards.

According to computerbase.de performance rating, different resolutions (at release):
3870 -> 4870: 55-70%
4890 -> 5870: 50-60%
6970 -> 7970: 30-40%

AMD underperforming has nothing to do with Nvidia; it's a fact on its own. Numbers don't lie.
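
Just to be explicit about how those percentages are read (a trivial sketch of my own; the example numbers are made up, not computerbase's actual ratings), a generational gain here is simply the new card's rating divided by the old one's, minus one:

```python
# Generational gain from two relative performance ratings.
# Example ratings are purely illustrative, not real computerbase.de data.

def gain_percent(old_rating: float, new_rating: float) -> float:
    """Percentage improvement of the new card over the old one."""
    return (new_rating / old_rating - 1) * 100

# e.g. a hypothetical rating of 135 against 100 would be reported as a 35% jump:
print(f"{gain_percent(100, 135):.0f}%")
```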

This. ;)

True, and remember it was AMD who "overpriced their hardware compared to previous generation". Or do you not remember the $550 HD7970 or the $450 HD7950?

When Nvidia released the high-end Keplers they priced them at $500 for the GTX680 and $400 for the GTX670, or $50 under the price gouging that AMD was doing. And because the GTX's were better all around, it forced AMD to reprice the HD7970 at $430 ($120 lower) and the HD7950 at $350 ($100 lower).

This shows clearly that any blame for "overpriced their hardware compared to previous generation" lies with AMD, and if it weren't for Nvidia's competition AMD would still be gouging users.

I don't think the reason for AMD's lower prices is NV alone; it's also their obvious inability to convince customers to buy their products.
 
Yes, AMD underperformed compared to its previous cards.

According to computerbase.de performance rating, different resolutions (at release):
3870 -> 4870: 55-70%
4890 -> 5870: 50-60%
6970 -> 7970: 30-40%

AMD underperforming has nothing to do with Nvidia; it's a fact on its own. Numbers don't lie.

Just for some perspective:
8800 Ultra > GTX 280: 41-57%
GTX 285 > GTX 480: 64-84%
GTX 580 > GTX 680: 29-33%
(computerbase.de ratings)

So excluding the GTX 285 > 480 jump, AMD has still done better.
 
Just for some perspective:
8800 Ultra > GTX 280: 41-57%
GTX 285 > GTX 480: 64-84%
GTX 580 > GTX 680: 29-33%
(computerbase.de ratings)

So excluding the GTX 285 > 480 jump, AMD has still done better.

And excluding GTX 580 > GTX 680 because GTX 680 is obviously a fraud. A big fake.
 
Just for some perspective:
8800 Ultra > GTX 280: 41-57%
GTX 285 > GTX 480: 64-84%
GTX 580 > GTX 680: 29-33%
(computerbase.de ratings)

So excluding the GTX 285 > 480 jump, AMD has still done better.

The 8800 Ultra was an extra halo card that was incredibly expensive and had minuscule market share compared to the other "normal" high-end cards on the Nvidia and AMD side. I wouldn't necessarily count that one.
 
Just for some perspective:
8800 Ultra > GTX 280: 41-57%
GTX 285 > GTX 480: 64-84%
GTX 580 > GTX 680: 29-33%
(computerbase.de ratings)

So excluding the GTX 285 > 480 jump, AMD has still done better.

And excluding GTX 580 > GTX 680 because GTX 680 is obviously a fraud. A big fake.

Actually, no, put up the GF114 > GK104 comparison instead. Is it even available anywhere, so we can see how big the real benefits from the new process and architecture are?

Because those are not generational leaps but only refreshes on the same process node and therefore relatively uninteresting.

That's exactly what I wanted to write, but something stopped me, and the next moment your post appeared. ;)
 
Lies, damn lies, statistics.

The fact that both manufacturers have stayed relatively close to one another over the years shows they have had similar improvements with node jumps. If one manufacturer was consistently pulling ahead with each node of technology the gap over the years would be increasing and that has most certainly not happened.
 
If one manufacturer was consistently pulling ahead with each node of technology the gap over the years would be increasing and..

... would lead to a very bad financial condition for the other company. Which, actually, is what we observe, but mostly due to a third big monstrous company having the most severe impact.

Again, I have to quote this paragraph.

Truth be told, if nVidia had released GK100... it would have been corporate suicide. With a report out just today stating that AMD's Q2 profits are down 40%, along with more delays for some Trinity desktop APUs as well as the HD7990 (if that even happens)... oh yeah, and you can't forget that massive failure called "Bulldozer". I'm personally worried about AMD's future. Not only because it's good to have choices in which products I buy, but because if AMD's graphics division or AMD itself were to cease to exist or fail to stay even remotely competitive... I'm quite sure the U.S. Justice Department and SEC (along with our business-hating President) would immediately file anti-trust suits against both Intel and nVidia.

Anyways... time to get to sleep.

/logout
 
Lies, damn lies, statistics.

The fact that both manufacturers have stayed relatively close to one another over the years shows they have had similar improvements with node jumps. If one manufacturer was consistently pulling ahead with each node of technology the gap over the years would be increasing and that has most certainly not happened.

It's been a yo-yo, with Nvidia ahead for a few years, then AMD ahead for a few. They are both pretty even now; I think Nvidia may be ahead on perf/watt, with AMD still slightly ahead on perf/mm2, but this is as close as I can remember them both being.

It mostly feels like a win for Nvidia because of the huge deficit they've turned around in such a short time, but it's pretty silly to declare them winners when they still don't even have a midrange card out.
 
The comparison of GK104 to GF114 is pointless. nVidia couldn't deliver GK100, so GK104 is what they could cook up for this gen; GK110 vs. whatever the top-of-the-line Sea Islands part turns out to be is the next comparison point (and from that one to the next-next gen).
 
The comparison of GK104 to GF114 is pointless. nVidia couldn't deliver GK100, so GK104 is what they could cook up for this gen; GK110 vs. whatever the top-of-the-line Sea Islands part turns out to be is the next comparison point (and from that one to the next-next gen).
I think that's reading much too far into it. nVidia didn't deliver GK100. This doesn't necessarily mean that they couldn't. Unfortunately, we'd need some inside information to know more about why the GK100 never surfaced.
 
They probably decided to "shelve", or more accurately, delay GK100 as soon as they got word that AMD's biggest GPU would only be 300 mm^2 or so. No doubt they did a small run of GK100 engineering samples, since this gives them a lot of time to fix any problems that show up, so GK110 will get all the benefits of a refresh spin.
 