Nvidia BigK GK110 Kepler Speculation Thread

Been using higher DPI settings on Windows (XP, 7, 8) for two years or more. I did encounter some problems with legacy software on Windows XP, but that's it.

The reviewers quoted here paint a bleaker picture than the reality, IMO.
 
What I saw in front of me several days ago was quite disappointing: full HD on a 15-inch laptop running Windows 7. Because of the relatively high resolution on a small display, the letters were terribly small... The picture itself was awful; scaling was completely messed up. Does this mean Windows 7 can't adjust this in some user-friendly manner?

I've had a full HD 15" laptop for almost 3 years now - it looks great on Win7 with the right settings. I look forward to higher DPI displays.
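For what it's worth, the "awful picture" usually comes from DPI virtualization: Windows Vista/7 bitmap-stretches any application that doesn't declare itself DPI-aware, which is what produces the blur. A minimal, hypothetical Win32 sketch of the opt-in (in a real application you'd normally declare this via the manifest instead):

```cpp
// Hypothetical sketch: opting a Win32 app out of DPI virtualization.
// Apps that don't do this (or the manifest equivalent) get bitmap-stretched
// by Windows at high DPI settings, hence the blurriness described above.
#include <windows.h>

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int) {
    // Must be called before any windows are created; the app is then
    // responsible for scaling its own UI to the real DPI.
    SetProcessDPIAware();

    HDC screen = GetDC(NULL);
    int dpiX = GetDeviceCaps(screen, LOGPIXELSX);  // e.g. 96, 120, 144
    ReleaseDC(NULL, screen);

    char msg[64];
    wsprintfA(msg, "Horizontal DPI: %d", dpiX);
    MessageBoxA(NULL, msg, "DPI check", MB_OK);
    return 0;
}
```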
 
I think they're happy enough; they just need to beat the competition, not beat it by the largest margin they possibly can. That margin is reserved for a speed bump along the way to the successor generation. More money for them.
 
A bit off-topic, but here's a little more evidence of Kepler's weakness in general compute: CUDA-accelerated raytracing in Adobe After Effects.

http://www.legitreviews.com/article/2127/1/

[Image: raytracing.png]
 
I wonder if Nvidia is "happy" with GK110's current performance/watt, or if they plan (or have already made) another revision during the product's lifespan.

I thought at least one new revision was required to improve yields as well as frequency potential.
A1 revision?! Is there a revision A0 prior to it?
 
I thought at least one new revision was required to improve yields as well as frequency potential.
What you're saying is this: they taped out GK104, then GK107, then GK106, all three chips on 28nm at high volume where good yields are critical, yet after all that they still didn't get the recipe right?

And, echoes from the past: care to explain how you increase frequency by changing metal layers only? I really want to know the physics behind that.

My point being: metal spins are rarely, if ever, done to fix speed or yield issues; there is simply not much you can fix that way. Ordinary logic bugs are another matter. But since this is the fourth chip of the same architecture, chances are the worst bugs were already flushed out.
 
What you're saying is this: they taped out GK104, then GK107, then GK106, all three chips on 28nm at high volume where good yields are critical, yet after all that they still didn't get the recipe right?

And, echoes from the past: care to explain how you increase frequency by changing metal layers only? I really want to know the physics behind that.

My point being: metal spins are rarely, if ever, done to fix speed or yield issues; there is simply not much you can fix that way. Ordinary logic bugs are another matter. But since this is the fourth chip of the same architecture, chances are the worst bugs were already flushed out.

GK110 has a few features that its smaller siblings lack (Dynamic Parallelism, Hyper-Q, ECC…).
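To make the Dynamic Parallelism point concrete: it lets a running kernel launch child kernels from device code, and among Kepler chips only GK110 (compute capability 3.5) supports it. A minimal, hypothetical CUDA sketch, built with `nvcc -arch=sm_35 -rdc=true dynpar.cu -lcudadevrt`:

```cuda
#include <cstdio>

// Child grid: launched from the GPU, not from the host.
__global__ void child(int parentThread) {
    printf("child of parent thread %d, thread %d\n", parentThread, threadIdx.x);
}

__global__ void parent() {
    // Device-side kernel launch: the defining Dynamic Parallelism feature.
    // GK104/106/107 (compute capability 3.0) cannot do this at all.
    child<<<1, 4>>>(threadIdx.x);
    cudaDeviceSynchronize();  // device-runtime call: wait for the child grid
}

int main() {
    parent<<<1, 2>>>();
    cudaDeviceSynchronize();
    return 0;
}
```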
 
GK110 has a few features that its smaller siblings lack (Dynamic Parallelism, Hyper-Q, ECC…).
Yes, I know that.
But if there were fatal bugs in earlier Kepler silicon, they would have shown up in GK110 too, had it taped out at the same time.
 
Ok, sorry, it's my fault really. I thought GK110 was GK100, but if the latter exists, then there is no problem... :D

And still, if I'm wrong and GK110 really is this late (with no GK100 before it), then how in hell did they know that GK104 would be enough to compete with the Radeon HD 7970? :???:
 
And still, if I'm wrong and GK110 really is this late (with no GK100 before it), then how in hell did they know that GK104 would be enough to compete with the Radeon HD 7970? :???:
Given that they released it two months after the 7970, I'm pretty sure they knew by then that it was faster.

Before the release of the 7970? Who knows. Why does it matter? One of the two is always going to be faster.
 
I guess there was a real possibility that Nvidia would not have had the performance crown, even initially. That would have been... weird.

Maybe with Maxwell they will pursue it a bit more aggressively, giving their performance GPU more raw power and bandwidth.
 
Before we have another rewriting of history, it's probably a good time to point out that AMD held the performance crown for 9 of the 12 months last year (and still does).
 
Before we have another rewriting of history, it's probably a good time to point out that AMD held the performance crown for 9 of the 12 months last year (and still does).

For single-GPU consumer cards, that is; you can't count the unofficial 7990 models in this equation, and card-wise, NV has the 690.
 
I thought at least one new revision was required to improve yields as well as frequency potential.
A1 revision?! Is there a revision A0 prior to it?
Nvidia's revision numbering starts from "1", not zero, which suggests Nvidia got it pretty much right at the first time of asking: GPUs shipping with 93% (K20X, 14 of 15 SMX) and 87% (K20, 13 of 15 SMX) of the units enabled within 225-235W.

Not sure about the supposed lateness of the part. AFAIK, the timeframe for delivery was H2 2012, and deliveries started in the first week (or so) of September 2012.
 
For single-GPU consumer cards, that is; you can't count the unofficial 7990 models in this equation, and card-wise, NV has the 690.

Well, granted; but boxleitnerb knows what he meant and what he considers to be the real performance crown. I agree with him in that regard, but it probably needs pointing out that AMD does hold the performance crown by that yardstick.
 