JohnH said:
Chalnoth said:
I don't see why alpha blending need be handled any differently from texture fetches, in terms of latency hiding. That is, I would expect that an architecture could simply use the latency hiding that is used for texture fetches to also hide the latency for alpha blends.
It's the same type of latency hiding technique, but it's a different set of buffering, so it costs extra area. And no, you can't combine the two, because they're required in completely different parts of the pipeline.
This doesn't make sense to me. They're fundamentally the same thing. I mean, you have a pixel input that leads to a pixel output. There may be some complications that arise in accessing the framebuffer as a read/write buffer, but other than that, I don't see why you would need to have another latency-hiding buffer.
I mean, the way I see it, what you should do is this:
1. Request data fetch from external memory.
2. Process other things until that data is available.
3. Process the pixel's final color.
4. Send pixel to output buffer, where it waits until it can be written to memory.
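To make the idea concrete, here's a toy Python sketch of steps 1-4 (the latencies and the simple in-flight queue standing in for the latency-hiding buffer are made up for illustration, not a claim about any real hardware):

```python
from collections import deque

FETCH_LATENCY = 4  # cycles; arbitrary toy value


def simulate(pixels):
    """Toy model of steps 1-4: issue a fetch, keep working on other
    pixels while it is in flight, finish each pixel once its data
    arrives."""
    in_flight = deque()  # (ready_cycle, pixel): the latency-hiding buffer
    todo = deque(pixels)
    output = []
    cycle = 0
    while todo or in_flight:
        # Step 1: request data from "memory" for the next pixel.
        if todo:
            in_flight.append((cycle + FETCH_LATENCY, todo.popleft()))
        # Steps 2-3: other pixels keep the pipeline busy; retire any
        # pixel whose data has now arrived.
        while in_flight and in_flight[0][0] <= cycle:
            _, px = in_flight.popleft()
            output.append(px)  # step 4: hand off to the output buffer
        cycle += 1
    return output, cycle


result, cycles = simulate(list(range(8)))
```

The point of the sketch is that a single buffer of in-flight pixels hides the fetch latency: all eight pixels complete in order, and the total cycle count is only the pipeline depth plus one fetch latency, not eight serialized fetches.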
I don't see why you'd want to:
1. Request texture data.
2. Process other things.
3. Process rest of pixel data.
4. Request framebuffer data.
5. Process other things.
6. Process blend.
7. Send pixel to output buffer.
I mean, sure, the actual functional unit that calculates the blend will probably be a specialized unit that can act before (or after, depending) other pixel processing, but I see no reason why one must have a different buffer here.
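For contrast, here is the same toy model extended to the 7-step flow above. The latencies are again invented for illustration; the thing to notice is that a pixel now waits in two different places, so the sketch needs two separate in-flight queues, which is essentially the "different set of buffering" being debated:

```python
from collections import deque

TEX_LATENCY = 4  # arbitrary toy values
FB_LATENCY = 4


def simulate_two_stage(pixels):
    """Toy model of steps 1-7: the texture fetch and the framebuffer
    read each have their own in-flight buffer, because a pixel waits
    at a different pipeline stage for each one."""
    tex_wait = deque()  # buffer hiding texture-fetch latency (steps 1-2)
    fb_wait = deque()   # second buffer hiding framebuffer-read latency (steps 4-5)
    todo = deque(pixels)
    output = []
    cycle = 0
    while todo or tex_wait or fb_wait:
        # Step 1: request texture data for the next pixel.
        if todo:
            tex_wait.append((cycle + TEX_LATENCY, todo.popleft()))
        # Step 3-4: texture data arrived; "shade" the pixel and request
        # its framebuffer data.
        while tex_wait and tex_wait[0][0] <= cycle:
            _, px = tex_wait.popleft()
            fb_wait.append((cycle + FB_LATENCY, px))
        # Steps 6-7: framebuffer data arrived; "blend" and output.
        while fb_wait and fb_wait[0][0] <= cycle:
            _, px = fb_wait.popleft()
            output.append(px)
        cycle += 1
    return output, cycle


res, cyc = simulate_two_stage(list(range(8)))
```

Whether the two queues could share one physical buffer is exactly the disagreement in the thread; the sketch only shows that the two waits occur at different stages, so pixels occupy wait storage twice per pass.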