Multi-GPU in AFR and input lag

Lukfi

Regular
Do multi-GPU systems working in AFR mode have input lag? How serious is it for a person who plays multiplayer first-person shooters at a semi-professional level, or is it unnoticeable? (In other words, is a dual-GPU setup a good choice for a PC that will be used primarily for playing Call of Duty 4?) Thanks in advance for your insights.
 
Yes, there's one additional frame of input lag compared to a single chip. Now, depending on how you use your dual setup, this could be anything from 0 to 1 frames of lag. If you use your dual setup to render at a higher framerate, say 100 fps vs. 50 fps for a single chip, then the lag will be identical, assuming of course you have perfect scaling so you get double the performance. If you instead increase the resolution so your framerate stays the same, there will be an extra frame of lag.
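The framerate-vs-resolution distinction above can be put into numbers. This is a toy sketch under the post's own assumptions (perfect AFR scaling, latency measured as time a frame spends in rendering), not a measurement:

```python
# Rough AFR latency arithmetic. In AFR each GPU renders every num_gpus-th
# frame, so a frame spends num_gpus display intervals on its GPU before
# it appears. Illustrative model only, matching the assumptions above.

def afr_render_latency_ms(fps_displayed, num_gpus):
    frame_interval_ms = 1000.0 / fps_displayed
    return frame_interval_ms * num_gpus

# Single GPU at 50 fps: each frame takes one 20 ms interval to render.
single = afr_render_latency_ms(50, 1)        # 20 ms

# Two GPUs used to double the framerate (100 fps): each chip still
# spends two 10 ms intervals (= 20 ms) on its frame, so lag is unchanged.
dual_faster = afr_render_latency_ms(100, 2)  # 20 ms

# Two GPUs used for higher resolution at the same 50 fps: each chip now
# spends two 20 ms intervals on its frame -> one extra frame of lag.
dual_heavier = afr_render_latency_ms(50, 2)  # 40 ms

print(single, dual_faster, dual_heavier)
```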

Now to be fair, it's not one frame of lag versus no lag. There is always lag. Direct3D allows the CPU to be up to three frames ahead of the GPU. Even if the game reduces the lag to just one frame, there's also on average half a frame of lag from your input until the application reads its message queue, then the time to render that frame, and then the time to transfer it to the monitor until you see the change. Say you're rendering at 60 fps on a 60 Hz monitor; then you'd get on average 2.5 frames, or about 40 ms, until the change has been fully updated on your screen. So instead of thinking of it as 1 vs. 0 frames of lag, think of it more like 3.5 vs. 2.5, or depending on how you see it, perhaps 3 vs. 2.
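The 2.5-frame figure breaks down like this at 60 fps / 60 Hz. This is just the arithmetic from the post spelled out, no new claims:

```python
# Baseline latency at 60 fps on a 60 Hz monitor, per the breakdown above.

frame_ms = 1000.0 / 60           # ~16.7 ms per frame

input_sampling = 0.5 * frame_ms  # avg wait until the game reads input
render = 1.0 * frame_ms          # rendering the frame
scanout = 1.0 * frame_ms         # transferring the frame to the monitor

total = input_sampling + render + scanout
print(round(total, 1))           # ~41.7 ms, i.e. roughly 40 ms
```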
 
Hmm, thanks. You said Direct3D *allows* the CPU to be ahead of the GPU, could it be tweaked somehow to reduce the lag as much as possible?
 
I think you can change it with a registry key, but a quick google didn't turn anything up.

However, a game can itself choose to sync to fewer frames. Some games, FPSes in particular, include a "reduce mouse lag" feature that does this.
 
NVIDIA's latest drivers allow editing the "maximum pre-rendered frames" directly in the control panel. The default is 3.
 
In ATT there is a Flip Queue Size option that apparently does the same, but is set to zero by default (don't know about CCC though).
 
Since the CPU is not being allowed to get as far ahead of the GPU, the input lag is reduced. Rendering performance may also be reduced in cases where the GPU runs out of work to do.

So what restraint is placed upon the CPU which causes it to "not get as far ahead of the GPU"? Is it simply not issuing commands via the driver beyond 3 frames ahead of the current frame?
 
Exactly. If the driver detects that it is at the "frames ahead" limit then it waits for the GPU to consume some of the data before submitting more.
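That waiting behaviour can be sketched as a toy simulation (hypothetical code, not actual driver logic): the CPU may not submit frame i until the GPU has finished frame i - max_ahead, so at most max_ahead frames are ever queued.

```python
# Toy model of the "frames ahead" throttle: lowering max_ahead shortens
# the queue the CPU builds up, which shortens input-to-display latency
# in the GPU-bound case. Numbers are illustrative, not measurements.

def steady_state_latency(cpu_ms, gpu_ms, max_ahead, n_frames=50):
    """Input-to-display latency of the last simulated frame, in ms."""
    gpu_done = [0.0] * n_frames   # time the GPU finishes frame i
    t = 0.0                       # CPU clock
    latency = 0.0
    for i in range(n_frames):
        # The throttle: block the CPU until frame i - max_ahead is done.
        if i >= max_ahead:
            t = max(t, gpu_done[i - max_ahead])
        input_time = t            # input for this frame is sampled here
        t += cpu_ms               # CPU builds and submits the frame
        prev = gpu_done[i - 1] if i else 0.0
        gpu_done[i] = max(t, prev) + gpu_ms   # GPU renders it
        latency = gpu_done[i] - input_time
    return latency

# GPU-bound case: CPU takes 1 ms per frame, GPU takes 16 ms.
print(steady_state_latency(1, 16, max_ahead=3))  # 48 ms
print(steady_state_latency(1, 16, max_ahead=1))  # 17 ms
```

With a fast CPU and a slow GPU, the queue always fills, so latency settles at roughly max_ahead GPU frame times: cutting the limit from 3 to 1 here drops it from 48 ms to 17 ms, at the cost of the GPU potentially idling if CPU frame times ever spike.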
 