Monitors/Frame buffers and wasted bandwidth

HeyJoJo

Newcomer
Just wondering, is there a reason why manufacturers don't care about bandwidth efficiency when it comes to sending images to video display units, or have I got it all wrong?

Take 1900 x 1200 at 32 bits per pixel @ 100 Hz: the GPU has to send roughly 870 MiB/s of data to the VDU (1900 x 1200 pixels x 4 bytes x 100 Hz is about 912 million bytes per second). If monitors had their own frame buffer and the GPU kept a modified-pixel buffer where each bit represents a pixel, the GPU could look up the modified-pixel buffer, read only the pixel data that has changed, and send a "no change" code otherwise.

GPU output
GPU reads the modified-pixel buffer. The first pixel is modified, so the GPU reads its pixel data from the frame buffer and sends it to the monitor.
The second pixel has not changed, so the GPU sends FFFFFFFF (no frame buffer read required).

Monitor input
if (valueIN == 0xFFFFFFFF) { don't change the buffer value at this address } else { update the buffer value at this address with valueIN }

This is a simple crude example to get my idea across.
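To make it a bit more concrete, here is a rough C sketch of both ends. The names and the send_to_monitor() call are made up for illustration, and 0xFFFFFFFF is just the sentinel from the example above (a real design would have to escape it, since it is also a valid pixel value):

#include <stddef.h>
#include <stdint.h>

#define NO_CHANGE 0xFFFFFFFFu          /* "no change" code from the example above */

void send_to_monitor(uint32_t word);   /* assumed: pushes one 32-bit word down the link */

/* GPU side: consult the modified-pixel (dirty-bit) buffer and only read the
   frame buffer for pixels that actually changed; otherwise send the sentinel. */
void gpu_send_frame(const uint32_t *frame, const uint8_t *dirty_bits, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        int modified = (dirty_bits[i >> 3] >> (i & 7)) & 1;
        send_to_monitor(modified ? frame[i] : NO_CHANGE);
    }
}

/* Monitor side: keep a local frame buffer and only overwrite entries for
   which a real pixel value arrives. */
void monitor_receive_pixel(uint32_t *monitor_fb, size_t addr, uint32_t value_in)
{
    if (value_in != NO_CHANGE)
        monitor_fb[addr] = value_in;
}

(As written the sentinel still occupies a full 32-bit word on the link, so the saving there is mostly in frame-buffer reads; a real "no change" code would presumably be much shorter than a whole pixel.)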

This does have some drawbacks: it depends on how many pixels change value each frame. If every pixel changes, you lose a little extra bandwidth because of the modified-pixel buffer.
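To put a rough number on that worst case (assuming one bit per pixel in the modify buffer):

#include <stdio.h>

/* Worst-case overhead of a 1-bit-per-pixel modify buffer at 1900 x 1200, 32 bpp. */
int main(void)
{
    double pixels          = 1900.0 * 1200.0;
    double frame_bytes     = pixels * 4.0;       /* ~9.12 MB per frame */
    double dirty_map_bytes = pixels / 8.0;       /* ~285 KB per frame  */

    printf("modify buffer adds about %.1f%% per frame\n",
           100.0 * dirty_map_bytes / frame_bytes);   /* ~3.1% */
    return 0;
}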
 
Why would you care? Power consumption? Not really an issue at the moment (maybe when we have high resolution e-paper displays able to update at 60+ Hz).
 
Why would you care? Power consumption? Not really an issue at the moment (maybe when we have high resolution e-paper displays able to update at 60+ Hz).

Would it not help if your games had more bandwidth to play with instead of wasting it all sending data already sent before?
 
Where is the saving in that? Turning your nice, ordered, predictable sequential reads into an essentially random access pattern?
Besides, how many pixels actually don't change every frame in a game? If the camera moves, pretty much every pixel will change, so you have to update the entire frame anyway.
 
Would it not help if your games had more bandwidth to play with instead of wasting it all sending data already sent before?
The bandwidth you speak of is only between the graphics card and the monitor; games don't touch it and aren't dependent on it.
 
This does have some drawbacks: it depends on how many pixels change value each frame. If every pixel changes, you lose a little extra bandwidth because of the modified-pixel buffer.
3D applications generally re-render the whole scene every frame. Even if nothing actually changed, the GPU wouldn't know unless it compared the current frame with the previous one (which would require twice as much bandwidth as simply reading the current frame).

You could save some bandwidth by using hold-type displays that can handle a variable refresh rate, though. I.e. you only refresh when a new frame is ready. But obviously that needs new displays and extensions to existing transmission protocols.
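For instance, assuming the link only has to carry frames that were actually rendered:

#include <stdio.h>

/* Back-of-the-envelope saving from a variable-refresh link; the 40 fps figure
   is just an assumed rendering rate, not a measurement. */
int main(void)
{
    const double frame_bytes  = 1900.0 * 1200.0 * 4.0;  /* same 32 bpp frame as above */
    const double fixed_hz     = 100.0;                   /* conventional fixed refresh */
    const double rendered_fps = 40.0;                    /* assumed actual frame rate  */

    printf("fixed link:    %.0f MB/s (resends unchanged frames)\n",
           frame_bytes * fixed_hz / 1e6);
    printf("variable link: %.0f MB/s (sends only new frames)\n",
           frame_bytes * rendered_fps / 1e6);
    return 0;
}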
 
Why's it wasted? What else are you going to do with the bandwidth between the display and video card besides sending images?

Are you proposing compression algorithms that'll add additional image delay?
 
Why's it wasted? What else are you going to do with the bandwidth between the display and video card besides sending images?

Are you proposing compression algorithms that'll add additional image delay?
Well, as long as you're using cables you may not care about the bandwidth (although those who had the first 2560x1600 monitors and needed dual-link-DVI-equipped hardware might have felt the pain back when that was rarer).

But when we switch to a wireless connection between the computer and the monitor (which is bound to happen: everything is going wireless lately, and I for one won't miss the cable clutter), his point will certainly become relevant.
 
Sending an image to the screen means you have to read it from memory first.
And compressing it doesn't require reading it from memory?

I can see the point about wireless, bandwidth-limited situations, but for PCs, where the image can't be delayed much, the compression you can apply is pretty limited.
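For what it's worth, the kind of cheap, low-latency compression I mean is something like per-scanline run-length encoding, which never holds back more than one line; this is only an illustration, not what any actual link protocol does:

#include <stddef.h>
#include <stdint.h>

/* Toy per-scanline RLE: output is (run length, pixel value) pairs. It adds at
   most one scanline of delay, but it only wins on flat-coloured content and
   can expand noisy content to twice the original size. */
size_t rle_encode_scanline(const uint32_t *line, size_t width,
                           uint32_t *out /* must hold 2 * width words */)
{
    size_t n = 0;
    for (size_t i = 0; i < width; ) {
        uint32_t value = line[i];
        uint32_t run = 1;
        while (i + run < width && line[i + run] == value)
            run++;
        out[n++] = run;      /* run length  */
        out[n++] = value;    /* pixel value */
        i += run;
    }
    return n;                /* number of 32-bit words written */
}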
 
This does have some drawbacks: it depends on how many pixels change value each frame. If every pixel changes, you lose a little extra bandwidth because of the modified-pixel buffer.
That's the typical case in the majority of 3D applications - which is exactly the time at which having the most bandwidth available is useful.
 