NVIDIA Kepler speculation thread

But they lose the market with the highest margins: professionals and supercomputers. No one sane will "upgrade" from a 580 to a 680 in such clusters.
Why didn't they just shrink GF110 to obtain a smaller, faster, and cheaper end product? Instead they did GK104, which sucks; of course it doesn't suck as a mainstream product, but as a high-end part it definitely does.
 
Ask Nvidia. I suspect purchasing decisions in the professional market, especially HPC, work on longer timescales, so having nothing for a year or so isn't that bad. Also, the older Teslas may still sell well to customers who don't have them yet.
 
Of course; not only did it exist, it still exists somewhere deep in their laboratories, and for the sake of progress it will be released in the improved form of GK110.

Then give me a reasonable explanation for why they supposedly dumped such an expensive hypothetical design like the "GK100" in order to go straight to the supposed GK110 "refresh". AFAIK it was always NV's intention to release a performance part FIRST for the Kepler family of products, and that performance part is GK104. It was never meant to be, and never will be, a high-end chip.

There is no reasonable explanation as to why NV would have decided to take such a ridiculous step backwards with GK104 in comparison to GF110.
GK104 is not a high-end core, full stop. Not without any HPC capabilities, and not with a 294mm2 die area. What marketing and its resulting pricing did with the core has nothing to do with the fact that GK104 is, was, and will remain the performance part of the Kepler family.

The GK10x chips are named like that because they don't carry the additional HPC-related logic and capabilities of GK110. I know it's highly tempting to spin wild speculation out of a simple digit difference without a single supporting indication, but if you think NVIDIA can afford to dump a multi-million-dollar project like a high-end core without it hurting, you're obviously confusing it with Intel.

A 7.1b-transistor, 550+mm2 chip simply wasn't manufacturable earlier than mid-2012, whichever way you turn it. Look at the current manufacturing quantities, yields and so on for 28HP and the whole story is rather self-explanatory.
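To put a rough number on why a 550+mm2 die is so much harder to build than a 294mm2 one, here's a back-of-the-envelope sketch using the standard gross-dies-per-wafer approximation and a simple Poisson yield model. The defect density is an illustrative assumption, not a published 28HP figure:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
    """Approximate gross die candidates per wafer, with an edge-loss correction."""
    r = wafer_diameter_mm / 2.0
    return (math.pi * r * r / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2, defect_density_per_cm2):
    """Simple Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.2  # assumed defect density in defects/cm^2 -- illustrative only

for name, area in (("550mm2 (GK110-class)", 550.0), ("294mm2 (GK104)", 294.0)):
    gross = dies_per_wafer(area)
    y = poisson_yield(area, D0)
    print(f"{name}: ~{gross:.0f} candidates/wafer, "
          f"yield ~{y:.0%}, ~{gross * y:.0f} good dies/wafer")
```

Under these assumed numbers the big die gets roughly a third the good dies per wafer of the 294mm2 part, which is the whole "not manufacturable in volume yet" argument in one calculation.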
 
GKxxx are internal codenames and have nothing to do with marketing.

Perhaps they were one day, but they've become part of marketing too, at least for semi-aware/informed people, since all the sites do a good job of mentioning the codenames before and after the releases.
GF110 is called GF110 because of marketing, too; it was originally GF100b (as proven by early BIOSes).
 
Perhaps they were one day, but they've become part of marketing too, at least for semi-aware/informed people, since all the sites do a good job of mentioning the codenames before and after the releases.
GF110 is called GF110 because of marketing, too; it was originally GF100b (as proven by early BIOSes).

GF100b was renamed to GF110 because the original GF110, a 32nm project, had been cancelled for obvious reasons.
 
Then give me a reasonable explanation for why they supposedly dumped such an expensive hypothetical design like the "GK100" in order to go straight to the supposed GK110 "refresh".

You ask the question and then answer it yourself. Did they dump it, or simply postpone it?
It might be expensive, but in any case it can be considered an investment, and the technology will almost surely be used one day.
I wonder why you worry about whether it is expensive or not; NV has plenty of money to spend.

A 7.1b-transistor, 550+mm2 chip simply wasn't manufacturable earlier than mid-2012, whichever way you turn it. Look at the current manufacturing quantities, yields and so on for 28HP and the whole story is rather self-explanatory.
 
It is a little strange that the 7970 gained no share, especially with the price of standard editions coming down quite a bit (in Europe you could find a standard-edition 7970 for around 50-60€ less than the 680) and the GHz Edition launch. As usual, these results should be taken with a grain of salt... (and, as usual, there is no sign of the 670 and lower Kepler parts in the list...)

Why would you spend money on a 7970 right now, when AMD has made it perfectly clear that in a few weeks the 1GHz edition will be hitting stores?

AMD has a very bad habit of killing its own sales by leaking or announcing upcoming products, seemingly without any concern for its third-party manufacturing partners and retailers.
 
Isn't the GHz edition supposed to be at least a little bit more expensive?

I guess so.

But regardless... if it came down to waiting a few weeks and raising my budget by $50 to get the newer card, one that isn't slower than the GTX 680, I have zero doubt that anyone informed enough to know the GHz Edition is coming soon would NEVER buy one of the 7970s collecting dust at a retailer near you.

The people you'd typically expect to have no clue about this upcoming faster 7970 are the uninformed consumers who buy their PCs at Best Buy, and to be honest, those people hardly ever upgrade their video card, and when they do it's a mid-range model at the very most.

So I still stand by my statement that AMD killed its own product sales by paper-launching a newer, faster, and better card while retailers still had large quantities of the current product lines in stock.

AMD would have been wise to wait a few weeks and make the GHz Edition a hard launch, so that retailers could sell off what they had and not take a big hit if AMD does indeed plan a price cut.
 
Why would you spend money on a 7970 right now, when AMD has made it perfectly clear that in a few weeks the 1GHz edition will be hitting stores?

AMD has a very bad habit of killing its own sales by leaking or announcing upcoming products, seemingly without any concern for its third-party manufacturing partners and retailers.

Because the 7970 is actually quite a bit less expensive (at least in Europe) than a 680, and than the announced MSRP of the 7970 GE, while delivering perhaps only a few % less performance (not counting overclocking, which is another point in the 7970's favour)?
Also, by the same reasoning, the 670 could have been an "internal" 680 killer (but there is no sign of a 670 or lower on the list; maybe the claim that all Kepler cards are detected as "680" by the Steam survey has some basis...)
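To make the price/performance point concrete, here's a toy perf-per-euro comparison. Both the prices and the "few % less performance" figure are illustrative assumptions based on the rough numbers in this thread, not measurements:

```python
# Back-of-the-envelope perf-per-euro comparison.
# All numbers below are illustrative assumptions, not measured data.
prices_eur = {"HD 7970": 400.0, "GTX 680": 460.0}   # assumed street prices
rel_perf   = {"HD 7970": 0.95,  "GTX 680": 1.00}    # "a few % less"

for card in prices_eur:
    value = rel_perf[card] / prices_eur[card] * 1000  # relative perf per 1000 EUR
    print(f"{card}: {value:.2f} relative performance per 1000 EUR")
```

Under these assumed numbers the 7970 comes out ahead on value despite the slightly lower performance, which is the argument being made above.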
 
There have been many occasions since at least the G80/G90 series that a GXy1z or G9z chip was released with no previous sign (or at least none that I saw) of a corresponding GXy0z or G8z. G92, G98, most of the GT21x, GF119, and GF117 are the ones I can think of. (Also GT216 was nothing like GT206/GT200b.)

However, if there really was no GK100, this would be the first time since at least G80 that a GXy10 chip was released with no GXy00 before it. Still, doesn't seem unlikely to me.
 
If you're into flashing, chances are that you're into overclocking as well, making a flash irrelevant.


Yes and no... For people who don't want to run overclocked cards through third-party software, which becomes a real headache once ULPS is in play, getting a 1050MHz 7970 without having to use any of it is not so bad.

I'm an overclocker and I push the limits of my hardware to set scores under H2O, LN2, etc., but I don't need to overclock my cards for everyday use (with 2x 7970 I really don't need it, whatever the resolution).

Since BIOS editing is not yet possible on Tahiti, having a 1050MHz BIOS with turbo boost is not a bad thing. What is really interesting is not whether or not to flash your card; what is interesting is that any AMD card with the same system could get turbo boost, including the midrange 7870/7850.
 
But they lose the market with the highest margins: professionals and supercomputers. No one sane will "upgrade" from a 580 to a 680 in such clusters.
Why didn't they just shrink GF110 to obtain a smaller, faster, and cheaper end product? Instead they did GK104, which sucks; of course it doesn't suck as a mainstream product, but as a high-end part it definitely does.

A few months' gap is not a tragedy... AMD has been absent from that market for years, for example...
 