"Where do the frames go"...what am I missing?

I agree that if the card is used for offline rendering, then this isn't much of a problem.

If the card is used for adding graphics on top of video in real time, there might be a reason to read the frame back over AGP. You could put the video in a texture and do all kinds of blending. But that solution would introduce a delay of more than one frame, and the output would be out of sync with the video source. That might be an unwanted effect in a studio.
It could be solved by reading the frame back over AGP and sending it to a genlock card that does the blending with zero delay for the video source. That could also be done by sending the DVI signal to an external genlock unit, but then you'd lose the alpha channel, and with it the ability to antialias gfx<->video edges.

But that doesn't bother me much.


Another application that I do want to work is image filtering.
I want to send an image to the graphics card, do a lot of filtering on it, and send it back to the CPU for the final analysis. Some image operations can't be done even with fully dynamic loops and branches in PS. And I want the result to be available to the CPU; sending it to the screen isn't the interesting part.

So I wonder if there's any chance that we get fast frame readbacks on R9700. (And cards coming later.)
I'm aware of the massive pipelining that is done. But this should be possible without stalling the pipe: just do the readbacks in a pipelined way. Invent a glDelayedReadPixels that tells the driver where to put the frame. Then keep on doing stuff, like maybe rendering the next frame. And finally do a glReadDelayedPixels to get the frame you wanted.
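For what it's worth, the kind of two-call API described above is roughly what OpenGL's pixel buffer objects (ARB_pixel_buffer_object) later provided: glReadPixels into a bound buffer object returns immediately, and mapping the buffer a frame later fetches the data once the DMA has finished. A minimal sketch (assumes a current GL context and an extension loader like GLEW; error checking omitted):

```c
#include <GL/glew.h>

GLuint pbo[2];  /* two PBOs, ping-ponged across frames */

void init_readback(int width, int height)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; i++) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4,
                     NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

/* Frame N: start the readback. With a pack buffer bound,
 * glReadPixels returns without waiting for the transfer. */
void begin_readback(int frame, int width, int height)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[frame % 2]);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

/* Frame N+1: map the buffer filled during the previous frame.
 * By now the copy has (hopefully) completed, so no stall.
 * Caller must glUnmapBuffer and unbind when done. */
const void *end_readback(int frame)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[(frame + 1) % 2]);
    return glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
}
```

The ping-pong between two buffers is what keeps the pipe from stalling: the frame you map is always the one whose readback was issued a frame earlier.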
 