It's not just bandwidth with the 4870 vs the 4850. Bandwidth is useless if you don't have the fillrate to back it up. Do you understand the generator and carrier concept?
It's not useless: the 4870 is faster per clock than the 4850, i.e. its lead over the 4850 is more than the 20% clock difference.
If bandwidth is the key, why did the 2900 XT fail? Why isn't the GTX 280 twice as fast as G92 when it has twice the bandwidth?
Who said bandwidth is key? Not me.
You can't just point and say it's bandwidth is limited by 30%. That's absurd.
Did you even read this thread?
I'm not just pointing and saying. I have a model, I fit data to it, and I have a very low standard error.
I'm not saying "it's bandwidth is limited by 30%", whatever the heck that means. I'm saying ~30% of a typical frame's rendering time (in HL2:E2, ET:QW, and F.E.A.R.) is consumed by operations that are BW limited on the 4850.
That part of the workload will be completed faster with more bandwidth; the rest will be completed faster with a quicker GPU.
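To make that concrete, here is a toy version of the split in Python. The 30% figure is the one from this thread, and the ~1.2x clock / ~1.8x bandwidth ratios are just rounded 4850-to-4870 spec ratios; treat this as an illustration of the model, not my actual fitting code.

```python
# Toy version of the frame-time split described above (illustration only).
# A frame on the reference card (4850) is split into a bandwidth-limited part
# and a core-limited part; each part scales with its own resource.

def predicted_frame_time(t_ref, bw_fraction, bw_scale, core_scale):
    """Frame time on a card with bw_scale x the bandwidth and
    core_scale x the core clock of the reference card."""
    t_bw = t_ref * bw_fraction            # part limited by memory bandwidth
    t_core = t_ref * (1.0 - bw_fraction)  # part limited by the core
    return t_bw / bw_scale + t_core / core_scale

# ~30% of a hypothetical 20 ms frame is BW-limited on the 4850.
# 4870 vs 4850: roughly 1.2x core clock (625 -> 750 MHz) and ~1.8x bandwidth.
t_4850 = 20.0
t_4870 = predicted_frame_time(t_4850, bw_fraction=0.30, bw_scale=1.8, core_scale=1.2)
print(f"predicted speedup: {t_4850 / t_4870:.2f}x")  # ~1.33x, more than the clock bump alone
```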
I wasn't talking about pixel fillrate, but rather texel fillrate.
Then use the right terminology. Pixels fill polygons, but texels don't fill anything. You should have said texturing/sampling/fetch rate. You even used the word blending, which implies pixel rate.
Anyway, you are still wrong to suggest that games are texturing limited. The number of texture samples required for a frame is (for the most part) proportional to screen pixel count, yet framerate falls off much more slowly than pixel count when you change resolution. If texturing were the bottleneck, the two would scale together.
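If you want to sanity-check that, the arithmetic is trivial. The fps numbers in this snippet are made up purely for illustration; plug in real results from any resolution-scaling benchmark.

```python
# Quick check on the "texturing limited" claim. If per-pixel texture fetch were
# the bottleneck, frame time would grow roughly in proportion to pixel count.
# The fps numbers here are made up; substitute real benchmark results.

res_lo, res_hi = (1280, 1024), (1920, 1200)
pixel_ratio = (res_hi[0] * res_hi[1]) / (res_lo[0] * res_lo[1])

fps_lo, fps_hi = 80.0, 60.0          # hypothetical results at the two resolutions
frametime_ratio = fps_lo / fps_hi

print(f"pixel count grows {pixel_ratio:.2f}x")     # ~1.76x
print(f"frame time grows {frametime_ratio:.2f}x")  # only ~1.33x in this example
# Frame time growing far less than pixel count means per-pixel work
# (texturing included) is not the dominant limit.
```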
What are you really trying to say here? Some percentage at higher resolution, vertex-limited at low resolution? That G92 is better than G80?
No, I'm educating you on two points:
- Increasing resolution doesn't increase bandwidth usage per pixel. Contrary to myth, GPUs do not become more bandwidth limited at higher resolution. The exception is when polygon speed is a factor, because then higher resolution increases the percentage of frame time spent on pixel processing. AA, however, does increase BW usage per pixel, so I agree with you there.
- For some rendering tasks, the 23% advantage of the 8800 GTX in bandwidth and ROP rate is important. In others, the 17-29% advantage of G92 in everything else is important. A typical frame has a mixture of both types of rendering tasks.
(sorry about the 28% figure in my previous post, as I used the wrong G80 memory clock.)
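For reference, these are the standard back-of-the-envelope formulas behind those percentages; nothing exotic, but plugging the wrong clock into them is exactly how I got the bad 28% figure, so here they are spelled out. The values you feed in are whatever the boards you compare actually run at.

```python
# Standard peak-rate formulas (nothing generation-specific here).

def bandwidth_gbs(bus_width_bits, effective_mem_clock_mhz):
    """Peak memory bandwidth in GB/s (bus width in bits, effective data rate in MHz)."""
    return bus_width_bits / 8 * effective_mem_clock_mhz / 1000

def pixel_fill_mpix(rop_count, core_clock_mhz):
    """Peak ROP (pixel) fill rate in Mpixels/s."""
    return rop_count * core_clock_mhz

def advantage_pct(a, b):
    """Percentage advantage of a over b."""
    return (a / b - 1) * 100
```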
G80 and G92 have much more in common than you think. There really isn't a vertex difference or what have you, because the SPs haven't changed.
You forgot about clock speed. See above.
Since you can't measure percentage differences, I don't know why you keep mentioning this percentage limitation.
Again, read the thread. The whole point is that with the 4850 and 4870 - which differ only in bandwidth and clock speed - we can measure it using multiple regression.
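For anyone who wants to reproduce the idea, here is a minimal sketch of that kind of regression. The clock and frame-time numbers are placeholders (I'm not pasting my actual data); the point is the model: frame time as a weighted sum of 1/core clock and 1/memory clock, with the weights telling you how much of the frame is bandwidth-bound.

```python
# Minimal sketch of the multiple regression described above, using numpy.
# Data points are placeholders: (core MHz, effective memory MHz, frame time ms)
# from benchmarking the same scene at different core/memory clock settings.
import numpy as np

samples = [
    (625, 1986, 20.0),   # 4850-like settings
    (750, 1986, 17.8),
    (625, 3600, 17.4),
    (750, 3600, 15.0),   # 4870-like settings
]

core = np.array([s[0] for s in samples], dtype=float)
mem  = np.array([s[1] for s in samples], dtype=float)
t    = np.array([s[2] for s in samples], dtype=float)

# Model: frame_time = a * (1/core) + b * (1/mem)
# 'a' captures work that scales with core speed, 'b' work that scales with BW.
X = np.column_stack([1.0 / core, 1.0 / mem])
coefs, residuals, rank, _ = np.linalg.lstsq(X, t, rcond=None)
a, b = coefs

# Fraction of a 4850-like frame spent on BW-limited work:
bw_share = (b / mem[0]) / t[0]
print(f"a = {a:.0f}, b = {b:.0f}, BW-limited share ~ {bw_share:.0%}")
```

With well-behaved data the residuals come out small, which is what I meant by a low standard error.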