NVIDIA Kepler speculation thread

So those tests were made by adjusting the power slider individually per game, between 75% and 91%? (Or did you set a fixed value based on those findings?)
No, it's an individual setting for each game and even each resolution. Sadly, this was necessary.

What is Nvidia really promising? That it will sometimes be able to hit 980 MHz, or that it's the average clock in normal games?
IIRC, something along these lines: "The advertised boost clock is something the average gamer should see more often than not in everyday usage".

Having the test sample going 100 MHz higher than specified seems quite suspicious.
Yes and no. I would think it depends on what the majority of sold cards will hit, and thus it can only be qualified over time and with enough data samples. The high number of OC cards and the high overclocks on them do not seem to point in that direction, though.


I seem to remember you making a promise to improve the frequency of English-written pieces on your respected site; however, the situation is becoming worse over time. I can hardly spot any English articles at all, so your site has dropped off my radar for quite some time now. I really hope that will change soon.
:cry:
:(
Sorry, I tried to convince my boss to give me at least some time to do that, but he's very reluctant. I was under the impression that the site was so desolate because no one wanted to do anything, but it seems I was wrong and it's not profitable or something - dunno. :(

Your article doesn't load on my iPhone.
screenshot

Thx & confirmed, bug reported. I hope the tech guys are working on it. :(
 
Hmm.. I wouldn't expect anandtech to be that naive..
http://www.anandtech.com/show/5818/nvidia-geforce-gtx-670-review-feat-evga/17
http://www.anandtech.com/show/5818/nvidia-geforce-gtx-670-review-feat-evga/20

You have an "EVGA Superclocked", supposedly binned edition that, despite the overclock, can't separate itself from the "ordinary reference card" from Nvidia. Maaayybe Nvidia had a slightly larger base to pick the review sample from than EVGA..
But not a word about this possible performance variance on either the power/boost page or in the conclusion.

IIRC, something along these lines: "The advertised boost clock is something the average gamer should see more often than not in everyday usage".

Isn't that only promising that the average (not worst-case!) card should run 980 MHz in the average game? (Of course, we'd better take it from Nvidia's official wording.)
The absolute worst-case card, where you can't get your money back as it's not defective, would probably do 915 MHz all the time (not that such a card is very likely to be seen).

No, it's an individual setting for each game and even each resolution. Sadly, this was necessary.

So what you did was very close to a fixed clock at 980 MHz. Seems reasonable given the above promise. It would be quite time-consuming to find the fixed power value that brings this card down to the promised-average level.
 
[image: ocuk670fortresscpuok.jpg]
 
Isn't that only promising that the average (not worst-case!) card should run 980 MHz in the average game? (Of course, we'd better take it from Nvidia's official wording.)
I should look up their wording, yes. But since it's advertised, every card sold should be able to hit it at least in some games.

So what you did was very close to a fixed clock at 980 MHz. Seems reasonable given the above promise.
Nvidia did not seem very enthusiastic when asked about a tool or driver switch to have GPU Boost turned off or limited to an arbitrary value. For the end user, this isn't exactly necessary, since you can already do very much with Power and Frame Rate Target combined, but for reviews, well, it would have saved us a few precious hours.
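
For illustration, here's a rough sketch of how that per-game hunt for a fixed power value could be automated. The set_power_target() and measure_average_clock() helpers are purely hypothetical stand-ins for whatever vendor tool and benchmark harness one actually has; the idea is just a binary search over the power-target percentage until the measured average clock lands near the advertised 980 MHz.

Code:
# Hypothetical sketch: bisect the power-target slider until the measured
# average boost clock lands near the advertised value (e.g. 980 MHz).
# set_power_target() and measure_average_clock() are placeholders for
# whatever vendor tool or benchmark harness is actually available.

TARGET_MHZ = 980        # advertised boost clock to emulate
TOLERANCE_MHZ = 5       # accept results within +/- 5 MHz

def find_power_target(set_power_target, measure_average_clock,
                      low=70.0, high=100.0, max_iterations=8):
    """Binary-search the power target (in percent) for one game/resolution."""
    for _ in range(max_iterations):
        mid = (low + high) / 2.0
        set_power_target(mid)                # apply the candidate power limit
        avg_clock = measure_average_clock()  # run the benchmark scene, log clocks
        if abs(avg_clock - TARGET_MHZ) <= TOLERANCE_MHZ:
            return mid
        if avg_clock > TARGET_MHZ:
            high = mid                       # still boosting too high: lower the cap
        else:
            low = mid                        # dropped below target: raise the cap
    return (low + high) / 2.0

And that search would have to be repeated per game and per resolution, which is exactly why it eats so many hours.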
 
Kaotik said:
While of course reducing microstuttering is great, the added input lag isn't that great.
The added delay would only be for 1 out of 2 frames (in the case of bimodal behavior), so it doesn't change worst-case lag. And if it's true that games use averaging of a couple of frames of T_render, then it's also better at showing the frame exactly when needed. Overall, that sounds like a definite win.
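
To put some (made-up) numbers on that: with a bimodal 10 ms / 23 ms AFR pattern, holding back only the early frame of each pair evens out the presentation interval, while the late frame is shown exactly when it's ready, so the worst-case lag stays the same.

Code:
# Minimal sketch with assumed numbers: bimodal AFR frame times of 10 ms and 23 ms.
# Only the short (early) frame of each pair is held back; the long frame is not
# moved, so worst-case latency is unchanged while the cadence becomes even.

raw_intervals_ms = [10, 23, 10, 23, 10, 23]   # alternating short/long frame times

def pace(intervals):
    """Present each short/long pair at its average cadence."""
    paced = []
    for i in range(0, len(intervals) - 1, 2):
        pair_avg = (intervals[i] + intervals[i + 1]) / 2.0
        paced += [pair_avg, pair_avg]         # short frame delayed, long frame untouched
    return paced

print(pace(raw_intervals_ms))                 # -> [16.5, 16.5, 16.5, 16.5, 16.5, 16.5]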
 
Bouncing Zabaglione Bros. said:
So they got it wrong with Fermi, and then made the same mistakes again with Kepler?

That seems pretty hard to believe, but the pattern of what Nvidia have been saying about things has been exactly the same, i.e. "no problems, things going to plan", then "big problems, poor yields, all TSMC's fault".
I have a hard time detecting a pattern: they never actually blamed TSMC for anything during the lifetime of Fermi. Can you point me to such a case?
 

Thanks, I do. Tridam did something similar: http://www.hardware.fr/articles/866-18/recapitulatif-performances.html

What is Nvidia really promising? That it will sometimes be able to hit 980 MHz, or that it's the average clock in normal games?
Having the test sample going 100 MHz higher than specified seems quite suspicious.

As far as I can tell, NVIDIA only promises that every card is qualified at 980 MHz, and that it can therefore reach this frequency.

And btw, has anyone tested how much it affects performance to put the card inside a normal case instead of on an open test bench?

Good question. And has anyone tested how real gaming sessions affect performance? Benchmarks usually last a couple of minutes at most, while gaming sessions last dozens of minutes, if not hours. That should make a pretty big difference in temperature. The more I think about it, the less I like NVIDIA's non-deterministic Turbo.

So much for Demerjian and his delusions about "less than 10k cards".

He wasn't talking about the 670.
 
I have a hard time detecting a pattern: they never actually blamed TSMC for anything during the lifetime of Fermi. Can you point me to such a case?

I think it's actually mentioned a few pages back in this thread. In an interview at Golem.de, Jen-Hsun says:

“The parasitic, umm, characterization from our foundries and the tools and the reality are simply not related. At all. We found that a major breakdown between the models, the tools and reality.”
 
Bouncing Zabaglione Bros. said:
I think it's actually mentioned a few pages back in this thread. In an interview at Golem.de, Jen-Hsun says:
I was under the impression we were talking about yield here. I wasn't aware there were characterization issues with Kepler.
 
Yeah, we've talked about this for quite a while.



Good question. And has anyone tested how real gaming sessions affect performance? Benchmarks usually last a couple of minutes at most, while gaming sessions last dozens of minutes, if not hours. That should make a pretty big difference in temperature. The more I think about it, the less I like NVIDIA's non-deterministic Turbo.

One thing I know is that our 680 sample reduced its Boost from 1124 to 1097 MHz when its GPU temperature went above 70 °C (we made sure it did...). The 670 sample went down from 1084 to 1071 MHz (only one notch, mind you, and only above 80 °C).
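
In case anyone wants to reproduce that kind of long-session logging themselves, here's a minimal sketch using the pynvml bindings (assuming they're installed and the driver exposes the graphics clock and GPU temperature through NVML); it simply samples both once per second so you can see where Boost steps down.

Code:
# Minimal sketch: log GPU temperature and graphics clock over a long gaming
# session to catch Boost stepping down (e.g. above 70 °C / 80 °C as seen above).
# Assumes the pynvml package is installed and the driver exposes these counters.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system

try:
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        print(f"{time.strftime('%H:%M:%S')}  {temp:3d} C  {clock:4d} MHz")
        time.sleep(1.0)                          # one sample per second
except KeyboardInterrupt:
    pass                                         # stop logging with Ctrl+C
finally:
    pynvml.nvmlShutdown()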
 
That's what I meant. Say you're aiming at the hand of my imaginary clock; by the time you're shown the 1.5 second image, the game world is already at the 2.0 second stage.

With AFR, you already have a 1/30 second latency for a 60 FPS scene, so adding a fraction of a frame of latency is probably not that big a deal, compared to the benefit of a more consistent frame rate.

I see this as a trade-off between latency and frame rate. Triple buffering is the same, so is AFR.

[EDIT] I think I know what you mean now. However, I suspect that FRAPS can only record the time when a frame is presented to the driver, so if FRAPS shows more consistent timing, it's probably actually more consistent. After all, if the driver just delays the presentation time, it's basically meaningless (i.e. the user will still see micro-stuttering).
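
Just to make that caveat explicit: a FRAPS-style analysis only ever sees the timestamps at which frames are handed to Present(), so anything the driver does after that point is invisible to it. A tiny sketch with assumed numbers for the usual bimodal AFR pattern:

Code:
# Minimal sketch with assumed numbers: frame-time analysis from FRAPS-style
# Present() timestamps. Anything the driver does after Present() (e.g. delaying
# the actual display of a frame) cannot show up in this data.

present_times_ms = [0, 10, 33, 43, 66, 76, 99]    # bimodal AFR pattern

frame_times = [b - a for a, b in zip(present_times_ms, present_times_ms[1:])]
jitter = [abs(b - a) for a, b in zip(frame_times, frame_times[1:])]

print("frame times:", frame_times)                # [10, 23, 10, 23, 10, 23]
print("frame-to-frame jitter:", jitter)           # [13, 13, 13, 13, 13] -> micro-stutter
print("average frame time:", sum(frame_times) / len(frame_times), "ms")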
 
I see this as a trade-off between latency and frame rate. Triple buffering is the same, so is AFR.
Indeed. But in this case they're not increasing the frame rate. They're just adding latency error on top of the fluidity-of-movement inaccuracy by delaying the "shortened" frame, no? So what's the benefit, exactly? Apart from a much nicer-looking graph in reviews trying to quantify micro-stutters by showing frame times?
 
Yeah, we've talked about this for quite a while.





One thing I know is that our 680 sample reduced its Boost from 1124 to 1097 MHz when its GPU temperature went above 70 °C (we made sure it did...). The 670 sample went down from 1084 to 1071 MHz (only one notch, mind you, and only above 80 °C).

Thanks, good to know someone is actually testing this stuff! :)
 
After all, if the driver just delays the presentation time, it's basically meaningless (i.e. the user will still see micro-stuttering).

But it also has to delay the presentation time - it doesn't help to be shown new frames at a constant interval if the game time of the frame (which is probably sampled shortly after the previous present()) is varying.
 
He said GK104, though, not 680. I still believe Nvidia neglected the desktop market in favor of OEM deals. Add to that the tight supply of 28nm wafers in general and strong demand, and you get why availability is so low.
 
Psycho said:
But it also has to delay the presentation time - it doesn't help to be shown new frames at a constant interval if the game time of the frame (which is probably sampled shortly after the previous present()) is varying.
See the TechReport 690 article: games often use multi-frame averaging to calculate the next frame time. So this, combined with delaying only 1 out of 2 frames, should give much smoother performance.
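
A rough sketch of what that multi-frame averaging looks like (the window length and frame times here are assumed, just to show the mechanism): the game advances its simulation by the average of the last few frame times instead of the raw last one, so evening out every second frame's presentation also evens out the game-time step it samples.

Code:
# Minimal sketch of multi-frame averaging of the simulation timestep.
# Window length and frame times are assumed example values.
from collections import deque

history = deque(maxlen=4)              # last few frame times, in ms

def next_timestep(last_frame_ms):
    """Return the game-time step used for the upcoming frame."""
    history.append(last_frame_ms)
    return sum(history) / len(history)

# Raw bimodal frame times vs. the smoothed step the game actually simulates with:
for ft in [10, 23, 10, 23, 10, 23]:
    print(f"raw {ft:2d} ms -> simulated step {next_timestep(ft):.1f} ms")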
 