Fillrate, how much is enough?

While discussing the performance of contemporary GPUs, the issue of fillrate often comes up. Honestly, I'm completely at a loss here. How important are fillrates at today's levels, really?

As a layperson I cannot begin to understand the background, but I can try to analyze performance numbers between different SKUs, and from what I can deduce, pixel fillrate in particular is nowhere near as important as memory bandwidth and compute power.

Could some knowledgeable person please shed some light on this? Would be greatly appreciated.
 
I am not knowledgeable, but let me try. Fillrate is still critical for performance, but theoretical figures are ever less relevant, and not only because of pixel shaders; there are more features altering the impact of "pure fillrate". Enough fillrate is when you can light and texture all the pixels you need.
 
Generally, right now, for contemporary games: a fillrate of about 25 GPixel/s is quite enough for 1080p. Above that, say 1440p or more, not even 35 GPixel/s would be enough to cover all cases comfortably.
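
To put figures like that in perspective, here's a rough back-of-the-envelope sketch (assuming a 60 fps target and counting every full-screen write as one "layer" of overdraw; only the 25 GPix/s figure comes from the post above, the rest is illustrative):

```python
# Rough sketch: how many full-screen "layers" of overdraw a given pixel fillrate
# could sustain per frame. Resolutions and the 60 fps target are assumptions for
# illustration; real frames mix partial-screen draws, blending and multiple passes.

def overdraw_headroom(fillrate_gpix_s, width, height, fps):
    pixels_per_layer = width * height            # one full-screen write
    pixels_per_second = pixels_per_layer * fps   # that layer, every frame
    return fillrate_gpix_s * 1e9 / pixels_per_second

print(overdraw_headroom(25, 1920, 1080, 60))     # ~200 layers at 1080p60
print(overdraw_headroom(25, 2560, 1440, 60))     # ~113 layers at 1440p60
```

Whether ~113 layers is "enough" at 1440p obviously depends on how much transparency and multi-pass work a game does.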
 
You basically can't have enough at high resolutions (over 1080p) with heavy use of transparencies and alpha textures. Nothing sucks more than the framerate starting to judder whenever there are a couple of trees or a fire in view...
 
Easy answer is: what we have now still isn't enough.

Fairly disgusted that the Titan still isn't 64 ROPs, no matter how efficient they are. Moar pixels, damnit! I want a 4K 32" monitor this year. Moar, moar, moar!
 
I think the joke is in reference to asking "how important are fillrates" instead of "how important is fillrate". Could be wrong.
 
Okay, my bad :)
But honestly, enough with the joking now. Let's get more specific:

7870 LE vs. 7970 GE:
+11% pixel fillrate
+50% bandwidth
+44% SP GFLOPs
+44% texel fillrate

The 7970 GE is 37% faster on average at 1600p and up to 50% faster in more modern DX11 games like Sleeping Dogs, Metro 2033 or Sniper Elite V2.
http://www.techpowerup.com/reviews/Club_3D/HD_7870_jokerCard_Tahiti_LE/28.html

So it seems pixel fillrate is quite unimportant even at 1600p, otherwise the performance difference between those cards would be closer to the 11%, right?
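
As a quick illustration of that argument, here's the same comparison as trivial arithmetic (the percentages are the ones quoted above; this is just a rough sanity check, not a proper analysis):

```python
# How far each theoretical advantage (7970 GE over 7870 LE, as quoted above) sits
# from the measured ~37% average gain at 1600p. Purely illustrative arithmetic.

spec_advantage = {
    "pixel fillrate":   11,
    "memory bandwidth": 50,
    "SP GFLOPs":        44,
    "texel fillrate":   44,
}
measured_gain = 37

for metric, adv in spec_advantage.items():
    print(f"{metric:16s} +{adv:2d}%  ->  {abs(adv - measured_gain):2d} points from measured")
# Pixel fillrate is ~26 points away from the measured gain; bandwidth and compute
# are within 7-13 points, which is the point being made above.
```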
 
I doubt that makes sense, because tripling the resolution also triples the demand for shading power and vastly increases the required memory bandwidth (and, with MSAA, the required amount of memory as well). And those aren't in excess, as my previous example with the two AMD cards shows, but they very much determine performance.
 
So it seems pixel fillrate is quite unimportant even at 1600p, otherwise the performance difference between those cards would be closer to the 11%, right?

Right, but for a long time now texel fillrate has been the more indicative one for performance. Good tests to throw at people who thought Tahiti needed more ROPs.
 
I doubt that makes sense, because tripling the resolution also triples the demand for shading power and vastly increases the required memory bandwidth (and, with MSAA, the required amount of memory as well).

I believe there are some steps in the rendering pipeline that consume a lot of fillrate without stressing the shader core that much. If I recall correctly, shadow volume rendering is one such case. I'm not really sure what else could be performance-limited by raw fillrate, though.
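
For what it's worth, here's a crude estimate of why a stencil shadow volume pass eats fill without touching the shader core much: colour writes are off and the fragment stage is essentially empty, so the cost is almost all depth/stencil fill. All the scene numbers below (caster count, screen coverage) are made up for the sake of the example:

```python
# Crude estimate of the depth/stencil fill consumed by stencil shadow volumes.
# Colour writes are disabled and the fragment work is trivial, so nearly all of
# the cost is raw fill. Caster count and screen coverage below are invented.

def shadow_volume_fill_gpix(width, height, casters, avg_coverage, fps):
    # Each extruded volume is rasterised twice (front faces, then back faces),
    # and every covered pixel is a depth test plus a stencil write.
    pixels_per_frame = width * height * avg_coverage * 2 * casters
    return pixels_per_frame * fps / 1e9

# e.g. 30 casters each covering ~25% of a 1080p screen at 60 fps:
print(shadow_volume_fill_gpix(1920, 1080, 30, 0.25, 60))   # ~1.9 GPix/s, no shading
```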
 
I doubt that makes sense, because tripling the resolution also triples the demand for shading power and vastly increases the required memory bandwidth (and, with MSAA, the required amount of memory as well). And those aren't in excess, as my previous example with the two AMD cards shows, but they very much determine performance.

When fillrate is spoken of as a quantity, I take it to mean the real-world figure, what you actually get out of the card (and thus dependent on things like available memory bandwidth), not simply the spec-sheet figure of ROPs * core clock, which is unrealistic and unreachable.
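
In other words, the spec-sheet number is just ROPs times core clock, one pixel per ROP per clock. A minimal sketch (the card figures below are hypothetical placeholders, not measurements):

```python
# Spec-sheet pixel fillrate: ROPs x core clock, one pixel per ROP per clock.
# The sustained figure is usually lower once blending and memory bandwidth get
# in the way. The example card below is a hypothetical placeholder.

def theoretical_fill_gpix(rops, core_clock_mhz):
    return rops * core_clock_mhz * 1e6 / 1e9    # GPix/s on paper

print(theoretical_fill_gpix(32, 1000))          # 32.0 GPix/s for a 32-ROP, 1 GHz part
# A synthetic fill test on such a card would typically sustain less than that,
# and that sustained number is the "real-world figure" meant above.
```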
 
Could some knowledgeable person please shed some light on this? Would be greatly appreciated.

My calculator says 746.496 MPix/s is enough for 120 fps at 5760 x 1080. So all we'd need is a chip with this fillrate that can produce every pixel in a single cycle. ;-)
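
For anyone checking the arithmetic, that's just pixels per frame times frame rate, under the (wildly optimistic) assumption that every pixel is written exactly once:

```python
# Pixels per frame times frames per second, assuming zero overdraw and a single pass.
print(5760 * 1080 * 120 / 1e6)   # 746.496 MPix/s
```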
 