NVIDIA Kepler speculation thread

Zaphod said:
So what's the benefit, exactly? Apart from a much nicer looking graph in reviews trying to quantify micro-stutters by showing frame times?
Less microstuttering visible to the user maybe? Would that qualify as a benefit?
 
Indeed. But in this case they're not increasing frame rate. They're just adding latency error on top of the inaccuracy in fluidity of movement by delaying the "shortened" frame, no? So what's the benefit, exactly? Apart from a much nicer looking graph in reviews trying to quantify micro-stutters by showing frame times?

No, fixing micro-stuttering does not increase frame rate. It's AFR that increases frame rate. The problem is that AFR introduces micro-stuttering, and that's why we need to fix it.

Let's review why there's micro-stuttering. With a naive implementation of AFR, the best-case scenario is that, with v-sync on, every frame takes less than 1/30 second to render; then you have an average frame rate of 60 fps and a latency of 1/30 second.

The problem comes when some frames (a "hard scene") take more than 1/30 second to render. In general, a hard scene lasts more than a few frames. So when it takes the first GPU 1/20 second to render a frame, a naive implementation of AFR still starts the rendering of the next frame on the second GPU immediately (that is, 1/60 second later). If both GPUs take 1/20 second to render their frames, you'll have micro-stuttering because the gaps between frames are: 1/60 second, 1/30 second, 1/60 second, 1/30 second, and so on.

The proper way to handle this situation is, of course, to delay the rendering of the second frame. In the aforementioned example, its rendering has to start 1/30 second later, i.e. to give a consistent frame rate of 30 fps. However, since it's impossible to know how long the next frame will take to render, the display driver has to guess, and that's why micro-stuttering is almost impossible to eliminate.
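To make the timing example concrete, here's a minimal Python sketch using only the illustrative numbers from this post (1/20 s per frame per GPU, a 1/60 s offset for naive AFR, 1/30 s for the delayed schedule); the names and structure are just made up for illustration:

```python
# Toy simulation of the two-GPU AFR timing example above.
# All numbers are the illustrative ones from the post, not measurements.

RENDER_TIME = 1 / 20   # each GPU needs 1/20 s per frame in the "hard scene"
N_FRAMES = 8

def finish_times(start_offset):
    """Completion times when two GPUs alternate frames and frame i is
    allowed to start no earlier than i * start_offset."""
    gpu_free = [0.0, 0.0]          # when each GPU can begin its next frame
    times = []
    for i in range(N_FRAMES):
        gpu = i % 2
        start = max(gpu_free[gpu], i * start_offset)
        finish = start + RENDER_TIME
        gpu_free[gpu] = finish
        times.append(finish)
    return times

for label, offset in [("naive AFR, 1/60 s offset", 1 / 60),
                      ("delayed start, 1/30 s offset", 1 / 30)]:
    t = finish_times(offset)
    gaps = ", ".join(f"{(b - a) * 1000:.1f} ms" for a, b in zip(t, t[1:]))
    print(f"{label}: {gaps}")
```

With the 1/60 s offset the gaps alternate between 16.7 ms and 33.3 ms, exactly the micro-stutter pattern described above; with the 1/30 s offset every gap comes out at 33.3 ms.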

Since game engines generally use the presentation call to sync their render thread, the display driver can delay the last presentation call to force the game engine to delay its rendering of the next frame. As FRAPS records frame times at the presentation call, I think if FRAPS shows consistent presentation times, it's safe to say that the user sees a consistent frame rate and animation.
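As a conceptual sketch of that last point (not real driver code, just an assumption-laden illustration of holding a presentation call until a guessed equidistant slot):

```python
import time

class MeteredPresenter:
    """Hypothetical frame-metering shim: hold an "early" presentation call
    until the next equidistant slot, so the game's render thread (which
    syncs on the presentation call) is paced to the guessed interval."""

    def __init__(self, guessed_interval):
        self.interval = guessed_interval   # the driver's guess of frame time
        self.next_slot = None

    def present(self, do_present):
        now = time.monotonic()
        if self.next_slot is None:
            self.next_slot = now
        if now < self.next_slot:
            time.sleep(self.next_slot - now)      # delay the early frame
        do_present()                              # the actual present/flip
        # schedule the next slot; if we are running late, resync to "now"
        self.next_slot = max(self.next_slot + self.interval, time.monotonic())
```

Whether the result is actually smooth still depends on how good the guessed interval is, which is exactly the "driver has to guess" problem mentioned above.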
 
The GTX 670 looks to be quite a nice card. Somewhat surprising that Nvidia put the specifications so close to the 680's. Well, there's a fair difference in shader count, but performance with Kepler (unlike earlier Nvidia cards, but just like AMD) now barely scales with shader count in most benchmarks. And both core and memory clocks are much closer to the GTX 680's than the HD 7950's clocks are to the HD 7970's. I guess there probably isn't much of a price difference between 5.5 GHz and 6 GHz GDDR5 chips, but Nvidia could still have clocked it lower just to differentiate the cards a bit more. But I guess Nvidia really wanted to achieve "faster than HD 7950" with this card instead of just a draw (and they certainly could do it cheaply if you look at that reference PCB).
 
Less microstuttering visible to the user maybe? Would that qualify as a benefit?
Well, that's what I'm asking, isn't it... :) The TechReport article you mentioned above was a nice rundown as to why, as I thought, frame time graphs are now pretty much useless. But it's still unclear to me to what extent current games will show a visible benefit, or whether some games will actually be worse. I've certainly seen a couple of places just praising their smooth-looking FRAPS output.
 
As FRAPS records frame times at the presentation call, I think if FRAPS shows consistent presentation times, it's safe to say that the user sees a consistent frame rate and animation.
Well, no, I don't think so. Thus, I think you're misunderstanding what I'm asking/saying here. See the article mentioned by silent_guy.

 
Well, no, I don't think so. Thus, I think you're misunderstanding what I'm asking/saying here. See the article mentioned by silent_guy.

http://techreport.com/articles.x/22890/3

Well, unless the time spent after presentation fluctuates a lot, the user should see consistent frame times if FRAPS records consistent frame times. Or unless you are suggesting that NVIDIA didn't actually fix any micro-stuttering, but just artificially made the presentation times look consistent. However, since the test results are not that consistent, I doubt that's the case (if NVIDIA wanted to cheat, they could easily do it with almost perfect results).

Of course, the only way to make sure is to set up a high speed camera, use a fast monitor, then compare its frames to see when a frame is actually shown. That's doable but probably not cheap.
 
Or unless you are suggesting that NVIDIA didn't actually fix any micro-stuttering, but just artificially made the presentation times look consistent.
Not suggesting, asking. :) Their own quote seems to suggest it'll (currently?) only work on some game engines. Seems to me then, their technique could possibly also make things worse on other engines?
Of course, the only way to make sure is to set up a high speed camera, use a fast monitor, then compare its frames to see when a frame is actually shown. That's doable but probably not cheap.
And it seems TechReport wanted to do exactly that (but couldn't get it done in time).
 
Not suggesting, asking. :) Their own quote seems to suggest it'll (currently?) only work on some game engines. Seems to me then, their technique could possibly also make things worse on other engines?

Well, of course, even a single-GPU setup has frame rate fluctuations. How to handle that is up to the game engine itself. A naive implementation will have stuttering even on a single-GPU setup, and of course it will perform even worse on a multi-GPU setup.

To me, "fixing micro-stuttering" means that, if a game performs well on a single-GPU setup (i.e. its frame rate is more or less consistent), it should also perform well on a multi-GPU setup. Maintaining a consistent frame rate on a single-GPU setup is up to the game engine itself, but on a multi-GPU setup the display driver has to provide some extra help, by delaying presentation time to help the game engine meter frame times more accurately.

Personally I think the best use for a multi-GPU setup is to do FSAA (such as NVIDIA's SLI FSAA modes). Trying to use two cheap GPUs to challenge an expensive GPU, IMHO, does not make much sense.
 
See the TechReport 690 article: games often use multi-frame averaging to calculate the next frame time. So this, combined with delaying only 1 out of every 2 frames, should give much smoother performance.
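As a rough sketch of what such multi-frame averaging could look like (hypothetical code, not any engine's actual implementation): the next simulation timestep is taken as the mean of the last few observed presentation intervals, so AFR's alternating short/long frames barely perturb it.

```python
from collections import deque

class FrameTimeEstimator:
    """Hypothetical moving-average frame-time predictor: the next
    simulation step is the mean of the last `window` observed
    frame-to-frame presentation intervals."""

    def __init__(self, window=4, initial=1 / 60):
        self.samples = deque([initial] * window, maxlen=window)

    def observe(self, interval):
        self.samples.append(interval)

    def next_timestep(self):
        return sum(self.samples) / len(self.samples)

# Feed it the alternating 16.7 ms / 33.3 ms intervals of un-metered AFR:
# the averaged timestep settles near 25 ms instead of jumping every frame.
est = FrameTimeEstimator()
for interval in [1 / 60, 1 / 30] * 4:
    est.observe(interval)
    print(f"observed {interval * 1000:5.1f} ms -> simulate {est.next_timestep() * 1000:5.1f} ms")
```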

This also means that every 2nd frame your input lag goes up a lot, which should be even more irritating than having a constant input lag of x ms.
 
What da... happens here? :D

http://www.newegg.com/Product/Produ...H&N=-1&isNodeId=1&Description=gtx+680&x=0&y=0

6 out of 7 models in stock and available for purchase.
Some are complaining that it is not worth it to throw $400 at such a small card (PCB-wise). Well, they are right to some extent. ;)

And still no 680s available in stock. :???:

I find the closeness of the two SKUs notable as well.
Nvidia doesn't seem to think the 670 will cut into the 680's numbers, or it doesn't care too much if it does.

Well, there will be a new batch of 680s along with the new 670s. Maybe somehow the new 680s will be faster to offset the small gap. Who knows?

I seem to remember you making a promise to improve the frequency of English-language pieces on your respected site; however, the situation is becoming worse over time. I can hardly spot any English articles at all, and as such your site has dropped off my radar for quite some time now. I really hope that will change soon.
:cry:

Google Translate doesn't help, does it?
 
I seem to remember you making a promise to improve the frequency of English-language pieces on your respected site; however, the situation is becoming worse over time. I can hardly spot any English articles at all, and as such your site has dropped off my radar for quite some time now. I really hope that will change soon.
:cry:

You could always learn German. :p
 
Kaotik said:
This also means that every 2nd frame your input lag goes up a lot, which should be even more irritating than having a constant input lag of x ms.
No, it doesn't. But it requires some advanced reasoning skills and disabling of knee-jerk reactions to see why this is so.

If the game renders the frames with an averaged time reference, the sample times at which it decides to construct a frame will be smoothed and equidistant (for similar load, which is to be expected in the short term).

If the GPU then messes this up by having different render times for alternating frames, you get stutter: after all, frames that were supposed to be shown at relatively equidistant sample times are now presented at non-equidistant times. If the driver now selectively delays those frames to make them equidistant again, you return to the situation that was intended by the game, and get a much smoother experience with limited additional worst-case lag and a constant average lag.

See? That wasn't so hard, was it?
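To put some toy numbers on that (same illustrative figures as earlier in the thread: the engine samples input every 25 ms, un-metered AFR presents at alternating 16.7 ms / 33.3 ms gaps, metering spaces the presents evenly; all values below are made up for illustration):

```python
# Lag = time from when a frame's input was sampled to when it is presented.
samples = [i * 25.0 for i in range(6)]                  # equidistant sample times (ms)
naive   = [50.0, 66.7, 100.0, 116.7, 150.0, 166.7]      # un-metered AFR present times
metered = [50.0 + i * 25.0 for i in range(6)]           # early frames delayed to even slots

for name, present in (("naive AFR", naive), ("metered AFR", metered)):
    gaps = [f"{b - a:.1f}" for a, b in zip(present, present[1:])]
    lags = [f"{p - s:.1f}" for p, s in zip(present, samples)]
    print(f"{name:12s} gaps (ms): {gaps}  lag (ms): {lags}")
```

In this toy case the delayed frames pick up about 8 ms of extra lag, but the worst-case lag (50 ms) doesn't change, the average lag becomes constant, and the presentation gaps become equidistant instead of alternating.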
 
6 out of 7 models in stock and available for purchase.
Some are complaining that it is not worth it to throw $400 at such a small card (PCB-wise). Well, they are right to some extent. ;)


The GTX 670 is finally getting us somewhere. Now I'd like to see the 7950 down to $350 (maybe ~$300 with rebates) and the 7870 at $299 and less... sadly the 7850 has no competition yet, so its price will probably stay where it is.

But at least it is finally happening: this gen is providing significant gains. The 670 is much faster than the 580 for $150 less.
 
Arbitrary correlation that says nothing. AMD could be quiet for any number of reasons - including that they overestimated demand. How do you know the real reason without numbers to back it up?

So if all you had to do to get sufficient supply was to overestimate demand, then why didn't NV do it? BTW, it's interesting that, as per your theory, TSMC failed to keep its end of the deal only with NV, but not with AMD or Qualcomm.

What public information?
CEO-level statements at the conference call.
 
Arbitrary correlation that says nothing. AMD could be quiet for any number of reasons - including that they overestimated demand. How do you know the real reason without numbers to back it up?

There are numbers, though. AMD's graphics division posted a pretty decent profit of $34m in Q1. A lot of people are going to be scared out of their pants when they realise it was only $10-$20 million less than the whole of Nvidia, which will post its results tomorrow.
 
One thing I know is that our 680 sample reduced its Boost from 1124 to 1097 MHz when its GPU temperature went above 70 °C (we made sure it did...). The 670 sample only went down from 1084 to 1071 (one notch, mind you, and only above 80 °C).
Ditto. Our reference 670 does the same thing. The EVGA 670 also drops by one bin at 80 °C. Neither card drops at any other temperature (at least not until you hit the 98 °C thermal limit), and in fact in our testing the 670 doesn't even reach 80 °C at stock.
 
Since the 670 is still widely in stock at the end of launch day, I am surprised that I haven't yet heard Charlie and his fans saying that Kepler demand is low.
 
So if all you had to do to get sufficient supply was to overestimate demand, then why didn't NV do it? BTW, it's interesting that, as per your theory, TSMC failed to keep its end of the deal only with NV, but not with AMD or Qualcomm.

I'm not claiming that TSMC is meeting AMD's projected demand and failing to do so for nVidia. There's absolutely no information on which to base such a claim. What I'm saying is that AMD being quiet is not indicative of anything. They can be quiet because supply is great or because consumer demand is poor. I haven't seen any evidence of particularly high volumes of 28nm AMD parts being sold.

Since the 670 is still widely in stock at the end of launch day, I am surprised that I haven't yet heard Charlie and his fans saying that Kepler demand is low.

Give it time.
 
I'm not claiming that TSMC is meeting AMD's projected demand and failing to do so for nVidia. There's absolutely no information on which to base such a claim. What I'm saying is that AMD being quiet is not indicative of anything. They can be quiet because supply is great or because consumer demand is poor. I haven't seen any evidence of particularly high volumes of 28nm AMD parts being sold.
On the Q1 conference call AMD said something like supply is meeting demand, but there's no extra capacity to handle a situation of increased demand. Ideally you'd want extra capacity to be available should you need it.
 