Interpreting 3DMark fillrate figures

arjan de lumens said:
I would guess at state changes. When performing a 'state change' (swapping to another texture, another blend mode, another framebuffer or whatever), the GPU may be unable to draw any pixels for a short period. When reducing the resolution, the amount of time needed for state changes is unchanged, but the number of pixels drawn between each state change is greatly reduced, causing the percentage of cycles lost to state changes to increase.

State changes really shouldn't show up in this sort of benchmark unless there are a LOT of them.

I'd guess it's because 640x480 isn't an exact multiple of the tile size, or there is a higher percentage of fractional tiles on the 640x480 screen. This would make memory access slightly less efficient on the 640x480 fill than at the larger resolution.
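The fractional-tile guess can be illustrated with a quick count. The 64x64 tile size below is purely an assumption for the sake of the example; real GPU tile sizes vary by architecture, and for some sizes (e.g. 32x32) both resolutions divide evenly:

```python
import math

def partial_tile_fraction(width, height, tile_w, tile_h):
    """Fraction of screen tiles that are only partially covered."""
    tiles_x = math.ceil(width / tile_w)   # tiles needed to cover the width
    tiles_y = math.ceil(height / tile_h)  # tiles needed to cover the height
    full_x = width // tile_w              # fully covered tile columns
    full_y = height // tile_h             # fully covered tile rows
    return 1 - (full_x * full_y) / (tiles_x * tiles_y)

# With hypothetical 64x64 tiles, 480 is not a multiple of 64,
# so 640x480 has a row of partial tiles while 1280x1024 has none.
for w, h in [(640, 480), (1280, 1024)]:
    print(f"{w}x{h}: {partial_tile_fraction(w, h, 64, 64):.1%} partial tiles")
```

Whether this costs measurable fillrate depends on how the hardware handles partial tiles, but it shows why a smaller, non-aligned framebuffer could be slightly less efficient to fill.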
 
Slightly lower fillrate at lower resolutions makes sense, because the framerate is much higher. Hence more draw calls, more swapping between the front and back buffers, more CPU overhead, more of DirectX and the drivers doing whatever they do behind the scenes, and so on.

5 GPix/s drawing 64 layers at 640x480 works out to about 250 fps, but only about 60 fps at 1280x1024. The same fixed overhead therefore eats a much larger slice of each frame at the lower resolution; less than half a millisecond per frame would account for Gouhan's results.
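The arithmetic behind that claim can be sketched directly. The 5 GPix/s peak fillrate and 64-layer workload are the figures from the post; the 0.5 ms fixed per-frame overhead (buffer swap, CPU, driver work) is the assumed value being tested:

```python
PEAK_FILL = 5e9    # peak fillrate in pixels/s (figure from the post)
LAYERS = 64        # texture layers drawn per screen pixel
OVERHEAD = 0.5e-3  # assumed fixed cost per frame, in seconds

def measured_fill(width, height):
    """Fillrate a benchmark would report if each frame pays a fixed overhead."""
    pixels = width * height * LAYERS   # pixels filled per frame
    draw_time = pixels / PEAK_FILL     # time spent actually filling
    frame_time = draw_time + OVERHEAD  # total time per frame
    return pixels / frame_time         # apparent (measured) fillrate

for w, h in [(640, 480), (1280, 1024)]:
    frame_time = w * h * LAYERS / PEAK_FILL + OVERHEAD
    print(f"{w}x{h}: {1 / frame_time:5.1f} fps, "
          f"measured fill {measured_fill(w, h) / 1e9:.2f} GPix/s")
```

At 640x480 the frame takes roughly 4 ms of actual drawing, so 0.5 ms of overhead costs over 10% of the measured fillrate; at 1280x1024 the frame is about 17 ms long and the same overhead costs only about 3%.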
 