Just wondering, is there a reason why manufacturers don't care about bandwidth efficiency when sending images to video display units, or have I got it all wrong?
Take 1900 x 1200 at 32 bits per pixel @ 100Hz: the GPU has to send about 870MB/s of data to the VDU. Suppose monitors had their own frame buffer and the GPU kept a modified-pixel buffer where each bit represents one pixel. The GPU could look up the modified-pixel buffer, read and send only the pixel data that has changed, and send a "no change" code otherwise.
GPU output
The GPU reads the modified-pixel buffer. The first pixel is modified, so the GPU reads its data from the frame buffer and sends it to the monitor.
The second pixel hasn't changed, so the GPU sends FFFFFFFF (no frame buffer read required).
Monitor input
if (valueIn == 0xFFFFFFFF) {
    /* don't change the buffer value at this address */
} else {
    /* update the buffer value at this address with valueIn */
}
This is a simple crude example to get my idea across.
This does have some drawbacks: it depends on how many pixels change value each frame. If every pixel changes, you actually lose a little extra bandwidth because of the modified-pixel buffer.