NVIDIA Kepler speculation thread

Still, if something was supposedly *far* more efficient...

Also makes you wonder why we don't see any 1.5GHz 7870s mopping the floor with the bigger chips... or not.

I would say that not only is power a limitation, but also clock scaling. Not all architectures scale perfectly (or even at all) with increased clocks.

The quickest example I can think of is the GF104, which showed a negligible increase in games (although I'd presume shader-based and other applications were helped more) despite a generous increase in clocks.
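To put a rough number on that, here's a toy Amdahl-style estimate (all figures invented, nothing measured on an actual GF104): if only part of the frame time actually scales with the core clock, even a healthy overclock buys very little FPS.

```python
# Toy model, not measured data: a fraction of frame time is limited by things
# that do not speed up with the core clock (memory bandwidth, setup, CPU), so
# a core overclock only shrinks the rest.
def fps_after_core_oc(base_fps, core_bound_fraction, clock_ratio):
    frame_time = 1.0 / base_fps
    core_part = frame_time * core_bound_fraction / clock_ratio   # scales with clock
    fixed_part = frame_time * (1.0 - core_bound_fraction)        # does not scale
    return 1.0 / (core_part + fixed_part)

# Hypothetical case: +15% core clock, but only 40% of the frame is core-bound
# -> roughly +5% FPS, i.e. a "negligible increase in games".
print(fps_after_core_oc(60.0, 0.4, 1.15))   # ~63.3 FPS
```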
 
There can also be speed paths in some part of the chip that will not work reliably at those speeds, or would need impractical cooling solutions and damaging voltage bumps to reach it.

It would seem to be a valid design tradeoff to improve per-clock capability or manufacturability in return for the inability to scale to a clock no practical implementation would reach anyway.
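As a concrete illustration of that tradeoff, some rough peak-FLOPs arithmetic (clocks approximate, and peak FLOPs obviously isn't game performance): trading the hot clock for more units, roughly what GK104 did relative to GF110, still comes out well ahead of what the narrower design would gain from any realistic clock bump.

```python
# Rough peak-FLOPs arithmetic only; clocks are approximate and peak FLOPs is
# not game performance. It just illustrates trading clock speed for width.
def peak_gflops(cuda_cores, shader_clock_mhz, flops_per_core_per_clock=2):
    return cuda_cores * shader_clock_mhz * flops_per_core_per_clock / 1000.0

gf110 = peak_gflops(512, 1544)    # GTX 580: fewer units on a ~1.5 GHz hot clock
gk104 = peak_gflops(1536, 1006)   # GTX 680: 3x the units at ~1 GHz, no hot clock

print(f"GF110 ~{gf110:.0f} GFLOPS, GK104 ~{gk104:.0f} GFLOPS")   # ~1581 vs ~3090
```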
 

It would also allow for better yields, would it not? Overclocking isn't a surefire method, and scaling isn't 1:1 or uniform from part to part, unless you can get every overclockable part to reach the same clocks.

What's also apparent in this case is that not all programs scale well with overclocks/higher frequencies at all. From a manufacturing perspective, it's much more economical, if not simply logical, to target a clock the whole wafer can reach rather than chase the fastest clock a single die can manage.
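A quick Monte Carlo toy (all numbers made up) for the binning point: each die has its own maximum stable clock, so shipping a conservative common clock keeps most of the wafer sellable, while chasing the fastest die scraps most of it.

```python
import random

# Invented distribution: per-die maximum stable clock in MHz, normally
# distributed around 1100 MHz with 60 MHz of spread.
random.seed(0)
dies = [random.gauss(1100, 60) for _ in range(200)]

for target_mhz in (1000, 1100, 1200):
    usable = sum(1 for mhz in dies if mhz >= target_mhz)
    print(f"{target_mhz} MHz bin: {usable / len(dies):.0%} of dies qualify")
```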
 

That's probably not the main reasoning.
GK106 has been MIA for the better part of the quarter and none of the leaks seem to have even mentioned it. That's why the 670M is a Fermi and the 660M is a GK108 on an unhealthy dose of 'roids :p

The 660Ti might be using a cut-down GK104 if further cost-cutting permits (geez, look at the 670 already), but anything lower than that is probably not a feasible endeavour now at 28nm pricing.

Plus, the 570 has been selling so well for a supposedly high-end card that it's hard to believe there's overstock (it's supposed to have wound down in production after the 7870 release).
 
There is no such thing as "overstock". Only wrong pricing.
Price reduction is the ultimate creator of demand. It is so effective that no feature set comes even close to fair pricing in driving sales.
 

ok, interesting points of view, thank you
 
All this talk about the updated higher clocked Tahiti has me thinking. With the lower power of the GTX680 (190 watt, 170 typical) vs Tahiti…
Paper numbers are one thing, real power consumption is another. Many reviews show a 15-20 W difference between the GTX 680 and the HD 7970.

what prevents Nvidia from doing the same and releasing a GTX685 that is a higher-clocked GTX680 with 6 & 8 pin power and a 225 watt max (210 typical) power usage?
1. yields / manufacturing capacity
2. GTX 680 already runs ~150 MHz higher than HD 7970
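A quick back-of-envelope using the usual dynamic-power rule of thumb (P scales roughly with f * V^2) and the 190 W figure from the question; the voltage ratios are pure assumptions. It suggests a mild clock bump at stock voltage fits under 225 W, but anything needing a real voltage bump does not.

```python
# Back-of-envelope only: dynamic power scales roughly with f * V^2.
# The 190 W base comes from the post above; clock/voltage ratios are assumed.
def scaled_power(base_watts, clock_ratio, voltage_ratio):
    return base_watts * clock_ratio * voltage_ratio ** 2

base_watts = 190.0
for clock_ratio, voltage_ratio in [(1.10, 1.00), (1.10, 1.05), (1.15, 1.08)]:
    watts = scaled_power(base_watts, clock_ratio, voltage_ratio)
    verdict = "fits" if watts <= 225 else "exceeds"
    print(f"{clock_ratio:.2f}x clock, {voltage_ratio:.2f}x voltage -> ~{watts:.0f} W ({verdict} 225 W)")
```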
 
2. GTX 680 already runs ~150 MHz higher than HD 7970
But it also has a higher latency for the basic arithmetic operations. That probably means a longer pipeline, which can be clocked higher, all other things being equal. Kepler also still lacks result forwarding, which means register access is part of the RAW latency (in AMD's GPUs it obviously is not), but that does not explain all of the difference (4 versus ~10 cycles; without register access I would guess 4 vs. 6 pipeline stages, but can't know for sure of course).
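To put those 4 vs ~10 cycle figures in perspective, a tiny sketch; the cycle counts come from the post, while the ILP per warp and the single-issue assumption are mine. Without result forwarding the whole RAW latency has to be covered by independent instructions, from ILP within a warp or from other warps.

```python
# Cycle counts from the post; ILP per warp and one issue per cycle are assumed.
# A dependent instruction can't issue until the RAW latency has elapsed, so the
# scheduler needs that many independent instructions in flight per issue slot.
def warps_to_hide_latency(raw_latency_cycles, independent_insts_per_warp=2):
    return -(-raw_latency_cycles // independent_insts_per_warp)  # ceiling division

for label, latency in [("~4-cycle ALU (HD 7970-style)", 4), ("~10-cycle ALU (GTX 680-style)", 10)]:
    print(f"{label}: {warps_to_hide_latency(latency)} warps per issue slot to avoid bubbles")
```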
 
Kepler GT 650M ships in Macbook Pro 15" and Next Gen Macbook Pro(Retina Display) 15". Big design wins for Nvidia.

So much for the "Kepler is unmanufacturable" lies from Demerjian, just like his earlier "Fermi is unmanufacturable" lies.
 

WTH? :rolleyes:
He is just saying that GTX680 yield is bad and there is hard evidence all over the world. Don't you see that chip is overclocked from the factory like hell? What exactly are you talking about?

Shame on fruiti.co. Wrong decision, should have gone with AMD chips. :devilish:
 
WTH? :rolleyes:
He is just saying that GTX680 yield is bad and there is hard evidence all over the world.

The only thing hard here is what you have for AMD.

Don't you see that chip is overclocked from the factory like hell?
It comes in at 20 watts under a 7970. And if you want to talk factory overclocked all I need to say is 7970 Ghz Edition.

What exactly are you talking about?
Back at you.

Shame on fruiti.co. Wrong decision, should have gone with AMD chips. :devilish:
With you as a spokesman for AMD I can see why Apple left AMD.
 
http://micgadget.com/21980/apple-is-close-to-finally-updating-the-mac-pro/

Apple went with ATI graphics last generation and got burned. Flickering, artifacts, overheating, no display; the list goes on. The drivers ATI made for the Mac Pro upgrade kit were also notoriously flaky. Even after Apple updated the drivers, many users still had issues, especially in professional tasks. And this cost Apple a significant amount of money; they are looking not to repeat it.

No one sane would use inferior AMD products after being burnt like that. And the fact Kepler swept up almost all of the Ivy Bridge notebook design wins is quite telling.
 
Software can be fixed, and it's not like nVidia drivers are without fault. However, software can't fix chips breaking, which is what happened before they moved to ATI/AMD.
 
No one sane would use inferior AMD products after being burnt like that. And the fact Kepler swept up almost all of the Ivy Bridge notebook design wins is quite telling.

Apple have always been a company that plays its suppliers off against each other. IIRC, they moved to AMD after Nvidia's disastrous bumpgate period. There was lots of talk then about how AMD swept up all the design wins and how Apple was never going to go back to Nvidia... until the next time...
 
Software can be fixed, and it's not like nVidia drivers are without fault. However, software can't fix chips breaking, which is what happened before they moved to ATI/AMD.

Replacing the bump material should be a lot easier than rewriting a graphics driver for a new production batch.
 