Any chance...

Here's where I heard of them. Sure, I don't expect such a display before 2009, and perhaps 2010 for an affordable one.
http://www.digitalversus.com/news_id-5731.html

Dual-link DVI should have the bandwidth for 1920x1200 at 120Hz, given that single-link DVI does it at 60Hz. VGA qualifies too.
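A rough sanity check of that claim. The blanking overhead here (~15%, close to reduced-blanking timings) is an assumption for illustration; the 165 MHz per-link TMDS pixel-clock limit is the standard single-link DVI figure, doubled for dual-link:

```python
# Rough sanity check of the DVI bandwidth claim above.
# Assumption: ~15% blanking overhead (reduced-blanking timings are
# close to this). Standard TMDS pixel-clock limits: 165 MHz for
# single-link DVI, 2 x 165 = 330 MHz for dual-link.

SINGLE_LINK_MHZ = 165.0
DUAL_LINK_MHZ = 2 * SINGLE_LINK_MHZ
BLANKING_OVERHEAD = 1.15  # total pixels / active pixels, approximate

def pixel_clock_mhz(width, height, refresh_hz):
    """Approximate pixel clock needed for a mode, in MHz."""
    return width * height * BLANKING_OVERHEAD * refresh_hz / 1e6

for hz in (60, 120):
    clk = pixel_clock_mhz(1920, 1200, hz)
    print(f"1920x1200 @ {hz} Hz: ~{clk:.0f} MHz "
          f"(single-link ok: {clk <= SINGLE_LINK_MHZ}, "
          f"dual-link ok: {clk <= DUAL_LINK_MHZ})")
```

By this estimate 1920x1200@60 needs ~159 MHz (fits single-link) and @120 needs ~318 MHz (fits only dual-link), matching the claim.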

The follow-up on those 120Hz LCDs, with an emphasis on 3D (stereoscopic) content. The writing might be a bit weird (but quite correct); it's translated from French :).
http://www.digitalversus.com/article-365.html

My opinion is there's no fundamental technical reason we haven't had 120Hz LCDs already (or even 85Hz, which would allow better gaming). LCD makers didn't want to bother, but now that's changing because the content industry is about to sell 3D television.
 
I'm not talking about input response. What I meant when I said AMD's CPUs seemed more responsive to me is that they don't have as much general desktop bloat.
Fundamentally this is something that you're going to have to show quantitatively (i.e. measure the relative input latency of two systems that are as identical as possible, differing only in the CPU). But again, the point is that, all else being equal, the relative input latency difference of two systems will be proportional to their difference in frame times (1/fps). Thus a faster frame-rate means less input latency.
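To put some numbers on the "all else being equal" claim: the render-side contribution to input latency is one frame time, 1/fps, so doubling the frame rate halves that component. A trivial illustration:

```python
# Render-side input latency contribution is one frame time, 1/fps.
def frame_time_ms(fps):
    """Duration of one frame in milliseconds at the given frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 120):
    print(f"{fps} fps -> {frame_time_ms(fps):.1f} ms per frame")
```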
 
Thus a faster frame-rate means less input latency.

Usually correct (and I'm not even agreeing with this IQ's... er... theory), but not necessarily true, though I apologise in advance if this enters the realm of nitpicking :). If you decouple your draws from your logic updates, then while draw hz <= logic hz that holds; but if you have an engine running with draw hz > logic hz it doesn't; instead, a higher frame-rate may mean unchanged input latency.

As a concrete example, ETQW runs the draw at 60hz (by default) while the logic runs at 30hz, so half the frames use one-frame-old world state (and thus inputs). End result: no matter whether you're getting 60fps or more, your input responsiveness will be the same as if you only got 30fps.
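A minimal sketch of that structure (not ETQW's actual code): input is consumed only on 30hz logic ticks, so consecutive 60hz frames render the same tick's world state.

```python
# Sketch of a decoupled loop: logic at 30 Hz, drawing at 60 Hz.
# Input is consumed only when a logic tick runs, so every other
# rendered frame shows one-tick-old input/world state.

LOGIC_HZ = 30
DRAW_HZ = 60

def run(num_frames, sample_input, update):
    """Render num_frames frames; return which logic tick each shows."""
    ticks_shown = []
    logic_ticks = 0
    for frame in range(num_frames):
        # Run enough logic ticks to catch up to this frame's time.
        # At 60 Hz draw / 30 Hz logic, this fires every second frame.
        while logic_ticks * DRAW_HZ < (frame + 1) * LOGIC_HZ:
            update(sample_input())  # input only matters here
            logic_ticks += 1
        ticks_shown.append(logic_ticks)
    return ticks_shown

print(run(8, lambda: None, lambda _inp: None))
# -> [1, 1, 2, 2, 3, 3, 4, 4]: pairs of frames repeat the same tick,
# so responsiveness is bounded by the 30 Hz logic rate.
```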
 
Usually correct (and I'm not even agreeing with this IQ's... er... theory), but not necessarily true, though I apologise in advance if this enters the realm of nitpicking :).
Yes, of course :) If your game isn't even handling input as fast as you're rendering, then several frames will represent the same input data. I was trying to avoid these and other details with my "all other things being equal" comment, but in any case the fundamental point is that there are no reasonable cases in which a game running at a *lower* frame-rate on processor A will be more responsive than the exact same game/code running at a *higher* frame-rate on processor B, although certainly they may have similar input latencies.

This is of course barring weirdness in how an asynchronous input/render loop is handled by an application, blah blah, but I maintain that no reasonable engine should ever see that behaviour.
 
I was trying to avoid these and other details with my "all other things being equal" comment, but in any case the fundamental point is that there are no reasonable cases in which a game running at a *lower* frame-rate on processor A will be more responsive than the exact same game/code running at a *higher* frame-rate on processor B, although certainly they may have similar input latencies.

;) *nods*

This is of course barring weirdness in how an asynchronous input/render loop is handled by an application, blah blah, but I maintain that no reasonable engine should ever see that behaviour.

Well, tbh, I can see a benefit in having a higher (and constant) logic frequency than draw frequency, so that your responsiveness isn't tied so intimately to framerate. OTOH, if you have such a high discrepancy (say 60hz logic while the game runs at 15fps), there's an argument to be made that any logic smoothness is completely wasted given the obviously poor visual experience.

The reason I posted about this is that Splash Damage has taken some flak, to say the least, for their decision to decouple the two cycles, even with JC publicly saying he'd try harder to get them synched up. I suppose that if you can only have synched cycles or xyz features, there's a case to be made that synching is dogma rather than an implementation decision.

Anyway, to get back on track, as you all have said, whatever latency differences between an IMC and FSB there may be, they are infinitely less important than something as simple as logic cycle hz.
 
There are other benefits to fixing your logic timestep. If you're doing something like a physics simulation, you pretty much require a fixed timestep. Variable timesteps mean that precision varies with framerate, and even minor variations can cause an entire simulation to explode.
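The usual way to get that fixed timestep is an accumulator: variable frame durations go in, identical simulation steps come out. A minimal sketch (a power-of-two step is used here purely so the float arithmetic in the demo stays exact; real engines typically use 1/60 or similar):

```python
# Fixed-timestep integration sketch: the simulation always advances
# by the same dt regardless of how long each rendered frame took, so
# results don't vary with framerate (the variable-dt pitfall above).

FIXED_DT = 1.0 / 64.0  # power of two so this demo's float math is exact

def simulate(frame_times, step):
    """Consume variable frame durations, emitting fixed-size steps."""
    accumulator = 0.0
    steps = 0
    for frame_dt in frame_times:
        accumulator += frame_dt
        # Run as many whole fixed steps as the elapsed time allows;
        # the remainder carries over to the next frame.
        while accumulator >= FIXED_DT:
            step(FIXED_DT)
            accumulator -= FIXED_DT
            steps += 1
    return steps

# Two very different frame patterns covering the same second of wall
# time produce the same number of identical physics steps.
smooth = simulate([1.0 / 64.0] * 64, lambda dt: None)  # steady 64 fps
choppy = simulate([1.0 / 16.0] * 16, lambda dt: None)  # steady 16 fps
print(smooth, choppy)  # -> 64 64
```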
 