Basic said:
1.
The only culling that can be done is post vertex shader processing, and will thus not save geometry calculation. (If a higher level API was used, it could be done better, but that won't happen.)
Not necessarily. You could, for example, transform the plane that divides the top from the bottom portion of the screen into world space, and determine with a simple dot product whether each vertex is above or below that plane. The problem is that if you are geometry limited, that's quite a bit of extra per-vertex work. It's much more efficient to do the test at a higher level, where the comparison can be done against bounding boxes rather than individual vertices.
Basic said:
2.
AFR will increase framerate, but not decrease latency. So while it might look better for a spectator, it won't help the feeling while playing.
This is only partially true. AFR adds just one extra frame of latency, and normal rendering already carries several frames of latency anyway (roughly 2-3 in the driver, plus 1-2 from double or triple buffering). Meanwhile, with the framerate doubled, each of those queued frames completes in half the time. So AFR is still a benefit even when the non-SLI framerate would be low: measured in milliseconds, total latency goes down.
No, the real problem isn't this, but rather syncing rendering between the cards. If you remember framerate graphs of the ATI Rage Fury MAXX, for example, with double buffering enabled it tended to sort of "ring," that is, every other frame was slow. That sort of effect would be very noticeable. Hopefully nVidia enables triple buffering by default with AFR enabled.
Another problem with AFR is, of course, that you don't get the memory size savings of splitting the framebuffer between the two video cards. This will, in some games, make it a bit harder to run at the higher resolution/FSAA setting that you would expect SLI to get you.