That really isn't the case. Say a single GPU can render one frame in 50 ms. With two GPUs, each still takes 50 ms to render a frame, but because they work on two frames simultaneously, together they finish two frames much faster than a single GPU can. The per-frame latency stays the same; only the throughput improves.
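Just to put numbers on that, a quick back-of-the-envelope sketch in Python (my own illustration, only the 50 ms figure comes from the example above):

Code:
# Back-of-the-envelope: per-frame latency stays at 50 ms,
# but with two GPUs the combined throughput doubles.
render_ms = 50                          # time for one GPU to render one frame

single_gpu_fps = 1000 / render_ms       # 20 fps, one finished frame every 50 ms
two_gpu_afr_fps = 2 * 1000 / render_ms  # 40 fps, two frames in flight at once

print(f"1 GPU : {single_gpu_fps:.0f} fps, {render_ms} ms per-frame latency")
print(f"2 GPUs: {two_gpu_afr_fps:.0f} fps, still {render_ms} ms per-frame latency")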
Now, in a perfect world a frame would be finished every 25 ms, but that usually isn't what happens. The main reason is that if you are GPU-limited, the CPU can send the rendering commands for a frame in well under 50 ms. Say the CPU takes 10 ms per frame. Then you end up with something like this:
Code:
Time
  0 ms  game starts
 10 ms  first frame sent    (GPU 0 starts rendering it)
 20 ms  second frame sent   (GPU 1 starts rendering it)
 30 ms  third frame sent    (both GPUs are still busy, so it waits)
 40 ms  fourth frame sent   (eventually the driver throttles the CPU when it gets too far ahead of the GPUs)
 60 ms  first frame displayed   (GPU 0 is now free and starts the third frame)
 70 ms  second frame displayed  (GPU 1 is now free and starts the fourth frame)
110 ms  third frame displayed
120 ms  fourth frame displayed
...
Now what does the application see? Frame times that oscillate between 10 ms and 40 ms. That can confuse the app's animation timer and is what shows up as "micro-stuttering".
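If you want to play with the numbers, here is a small Python sketch of the same situation (my own toy model, not anything a real driver does): two GPUs alternating frames, 50 ms of GPU work per frame, 10 ms of CPU submission per frame.

Code:
# Toy model of alternate-frame rendering (AFR) on two GPUs.
# Numbers match the timeline above: 50 ms of GPU work per frame,
# 10 ms for the CPU to submit a frame's commands.
# (For simplicity this ignores the driver throttling the CPU
#  when it gets too far ahead of the GPUs.)
RENDER_MS = 50
SUBMIT_MS = 10
NUM_FRAMES = 8

gpu_free = [0, 0]        # time at which each GPU finishes its current frame
submit_time = 0
display_times = []

for frame in range(NUM_FRAMES):
    submit_time += SUBMIT_MS                 # CPU finishes submitting this frame
    gpu = frame % 2                          # AFR: frames alternate between GPU 0 and GPU 1
    start = max(submit_time, gpu_free[gpu])  # the GPU may still be busy with an earlier frame
    gpu_free[gpu] = start + RENDER_MS
    display_times.append(gpu_free[gpu])

prev = None
for n, t in enumerate(display_times, 1):
    delta = "" if prev is None else f"  (+{t - prev} ms since the previous frame)"
    print(f"frame {n}: displayed at {t:3d} ms{delta}")
    prev = t

It reproduces the 60/70/110/120 ms display times from the timeline, and the deltas alternate 10 ms, 40 ms, 10 ms, ... which is exactly the oscillation the application's timer sees.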
-FUDie