Zaphod said:
So what's the benefit, exactly? Apart from a much nicer looking graph in reviews trying to quantify micro-stutters by showing frame times?

Less microstuttering visible to the user maybe? Would that qualify as a benefit?
Indeed. But in this case they're not increasing the frame rate. They're just adding latency error on top of the inaccuracy in fluidity of movement by delaying the "shortened" frame, no? So what's the benefit, exactly? Apart from a much nicer looking graph in reviews trying to quantify micro-stutters by showing frame times?
"Less microstuttering visible to the user maybe? Would that qualify as a benefit?"

Well, that's what I'm asking, isn't it... The TechReport article you mentioned above was a nice rundown as to why, as I thought, frame time graphs are now pretty much useless. But it's still unclear to me to what extent current games will show a visible benefit, or whether some games will actually be worse. I've certainly seen a couple of places just praising their smooth-looking Fraps output.
"As FRAPS records rendering time @ presentation call, I think if FRAPS gives a consistent presentation time, it's safe to say that the user sees a consistent frame rate and animation."

Well, no, I don't think so. Thus, I think you're misunderstanding what I'm asking/saying here. See the article mentioned by silent_guy:
The other problem is the actual content of the delayed frames, which is timing-dependent. The question here is how a game engine decides what time is "now." When it dispatches a frame, the game engine will create the content of that image—the underlying geometry and such—based on its sense of time in the game world. If the game engine simply uses the present time, then delaying every other frame via metering will cause visual discontinuities, resulting in animation that is less smooth than it should be. However, Petersen tells us some [my emphasis] game engines use a moving average of the last several frame times in order to determine the "current" time for each frame. If so, then it's possible frame metering at the other end of the graphics pipeline could work well.
A further complication: we can't yet measure the impact of frame metering—or, really of any multi-GPU solution—with any precision. The tool we use to capture our performance data, Fraps, writes a timestamp for each frame at a relatively early point in the pipeline, when the game hands off a frame to the Direct3D software layer (T_ready in the diagram above). A huge portion of the work, both in software and on the GPU, happens after that point.
http://techreport.com/articles.x/22890/3
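To make the two quoted caveats a bit more concrete, here is a minimal toy sketch in Python. It is my own illustration with invented 10 ms / 23 ms frame gaps, not NVIDIA's actual metering algorithm: it spaces an alternating short/long AFR cadence onto an even presentation grid and shows that, if the engine stamps each frame with the wall-clock time at dispatch, the content spacing still alternates even though the presentation spacing looks perfectly even.

```python
# Toy model of AFR micro-stutter and frame metering. Illustrative only:
# the real driver behaviour is not public, and these numbers are invented.

def simulate(n_frames=8, short=10.0, long=23.0):
    # Frame-ready times: AFR setups tend to finish frames in alternating
    # short/long gaps, which is what shows up as micro-stutter.
    ready = [0.0]
    for i in range(1, n_frames):
        ready.append(ready[-1] + (short if i % 2 else long))

    # Metered presentation: hold back the "early" frame of each pair so
    # frames reach the display on an even grid at the average interval.
    avg = (short + long) / 2.0
    present = [ready[0] + i * avg for i in range(n_frames)]

    # Content timestamp if the engine simply samples wall-clock "now"
    # when it dispatches the frame (no multi-frame averaging).
    content = ready

    print("frame  ready gap  metered present gap  content gap")
    for i in range(1, n_frames):
        print(f"{i:5d}  {ready[i] - ready[i-1]:9.1f}"
              f"  {present[i] - present[i-1]:19.1f}"
              f"  {content[i] - content[i-1]:11.1f}")

simulate()
```

The metered column comes out flat while the content column keeps alternating, which is the discontinuity the article warns about for engines that do not average their frame times. The gap between the "ready" and "present" columns also hints at why a timestamp taken early in the pipeline need not tell you when a frame actually hits the screen.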
"Or, unless you are suggesting that NVIDIA didn't actually fix any micro-stuttering, but just artificially makes the presentation time look consistent."

Not suggesting, asking. Their own quote seems to suggest it'll (currently?) only work on some game engines. It seems to me, then, that their technique could possibly also make things worse on other engines?
"Of course, the only way to make sure is to set up a high speed camera, use a fast monitor, then compare its frames to see when a frame is actually shown. That's doable but probably not cheap."

And it seems TechReport wanted to do exactly that (but couldn't get it done in time).
See the TechReport 690 article: games often use multi-frame averaging to calculate the next frame time. So this, combined with delaying only one out of every two frames, should give much smoother performance.
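A minimal sketch of that multi-frame averaging idea (my own illustration, not any particular engine's code; the AveragedClock name and the 4-frame window are made up): the animation clock advances by the average of the last few frame times, so an alternating 10 ms / 23 ms cadence settles into an even ~16.5 ms animation step once the window fills.

```python
from collections import deque

class AveragedClock:
    # Game-time clock that advances by a moving average of recent frame
    # times instead of the raw per-frame delta. The 4-frame window is an
    # arbitrary choice for illustration; real engines differ.
    def __init__(self, window=4):
        self.deltas = deque(maxlen=window)
        self.game_time_ms = 0.0

    def tick(self, real_delta_ms):
        self.deltas.append(real_delta_ms)
        step = sum(self.deltas) / len(self.deltas)
        self.game_time_ms += step
        return step

clock = AveragedClock()
for delta in [10.0, 23.0, 10.0, 23.0, 10.0, 23.0]:  # micro-stuttering input
    print(f"raw delta {delta:4.1f} ms -> animation step {clock.tick(delta):4.1f} ms")
```

An engine that times its animation this way pairs well with evenly metered presents, whereas one that uses the raw delta would bake the alternation into the frame content.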
I find the closeness of the two SKUs notable as well.
Nvidia doesn't seem to think the 670 will cut into the 680's numbers, or it doesn't care too much if it does.
I seem to remember you making a promise to improve the frequency of English-written pieces on your respected site. However, the situation is becoming worse over time; I can hardly spot any English article at all. As such, your site has dropped off my radar for quite some time now. I really hope that will change soon.
Kaotik said:
This also means that every 2nd frame your input lag goes up a lot, which should be even more irritating than having a constant input lag of x ms.

No, it doesn't. But it requires some advanced reasoning skills and disabling of knee-jerk reactions to see why this is so.
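To put rough numbers on the point being disputed (same invented 10 ms / 23 ms cadence as the earlier sketch, not a measurement), here is the extra display delay metering adds per frame: every other frame is held back by roughly half the frame-time imbalance, the rest are not delayed at all.

```python
# Rough size of the per-frame delay metering adds, using the same invented
# 10 ms / 23 ms cadence as the earlier sketch (not a measurement).
short, long = 10.0, 23.0
avg = (short + long) / 2.0            # 16.5 ms metered present interval

ready, present = 0.0, 0.0
for i in range(1, 7):
    ready += short if i % 2 else long
    present += avg
    print(f"frame {i}: held back {max(0.0, present - ready):3.1f} ms")
```

Whether an alternating ~6.5 ms hold-back is noticeable on top of the usual render latency is exactly the question being argued here.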
6 out of 7 in stock and available for purchase.
Some are complaining that it is not worth it to throw $400 at such a small card (PCB-wise). Well, they are right to some extent.
Arbitrary correlation that says nothing. AMD could be quiet for any number of reasons - including that they overestimated demand. How do you know the real reason without numbers to back it up?
"What public information?"

CEO-level statements at the CC (conference call).
"One thing I know is that our 680 sample reduced its Boost from 1124 to 1097 MHz when its GPU temperature went above 70 °C (we made sure it did...). The 670 sample did go down from 1084 to 1071 MHz (only one notch, mind you, and only above 80 °C)."

Ditto. Our reference 670 does the same thing. The EVGA 670 also drops by one bin at 80 °C. Neither card drops at any other temperature (at least not until you hit the 98 °C thermal limit), and in fact in our testing the 670 doesn't even reach 80 °C at stock.
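For reference, here is the reported behaviour of those two particular samples written out as a toy Python lookup. This is my own sketch using only the numbers quoted above; observed_boost_mhz is a made-up name, and it is not how GPU Boost is actually implemented.

```python
# Toy lookup of the behaviour reported above, using only the data points
# quoted in this thread. GPU Boost's real policy also reacts to power
# draw and other inputs, so this is not a general model.
def observed_boost_mhz(card, temp_c):
    if card == "GTX 680":                 # the 680 sample discussed above
        return 1097 if temp_c > 70 else 1124
    if card == "GTX 670":                 # the 670 sample discussed above
        return 1071 if temp_c > 80 else 1084
    raise ValueError(f"no data for {card}")

print(observed_boost_mhz("GTX 680", 72))  # 1097
print(observed_boost_mhz("GTX 670", 75))  # 1084 (no drop until above 80 °C)
```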
So if all you had to do to get sufficient supply was to overestimate demand, then why didn't NV do it? BTW, it's interesting that, as per your theory, TSMC failed to keep its end of the deal only with NV, but not with AMD or Qualcomm.
Since the 670 is still widely in stock at the end of launch day, I am surprised that I haven't yet heard Charlie and his fans saying that Kepler demand is low.
RecessionCone said:
Since the 670 is still widely in stock at the end of launch day, I am surprised that I haven't yet heard Charlie and his fans saying that Kepler demand is low.
Give it time.
"I'm not claiming that TSMC is meeting AMD's projected demand and failing to do so for nVidia. There's absolutely no information on which to base such a claim. What I'm saying is that AMD being quiet is not indicative of anything. They can be quiet because supply is great or because consumer demand is poor. I haven't seen any evidence of particularly high volumes of 28nm AMD parts being sold."

On the Q1 conference call AMD said something like supply is meeting demand, but there's no extra capacity to handle a situation of increased demand. Ideally you'd want extra capacity to be available should you need it.