Xbit's nonsensical conclusions

trinibwoy

This has been irking me for a while but is anybody else bothered by the random conclusions Xbit throws out at the bottom of each benchmark graph? This one I found especially unnerving -

Unreal Tournament 2004 is an easier application even than Far Cry or Half-Life 2, being not very generous for shader-based special effects. The problem with the lower performance ceiling for ATI’s graphics cards can be observed here: all the devices, except for the Radeon X1800 XL, are limited by the CPU. The cards from Nvidia are likewise limited, but the speed ceiling is higher for them for some unclear reason.

http://www.xbitlabs.com/articles/video/display/radeon-x1900xtx_26.html

Maybe this wasn't worth a thread, just wanted to exhale :LOL:
 
Perhaps if you specified what you think is nonsensical, or is it self-evident in your opinion...? :) No, I didn't have the patience to read the article (yet); xbit reviews tend to be chopped up into 60 pages, or damn near enough anyway... Give me a reason to self-flagellate, and I will! :)
 
It seems they reached the correct conclusion: that ATi's drivers have a larger CPU overhead, i.e. they are less efficient.

This has been true for as long as I can remember.

Btw, my VGA is an X800XL with which I am very happy...
 
Guden Oden said:
Perhaps if you specified what you think is nonsensical, or is it self-evident in your opinion...? :) No, I didn't have the patience to read the article (yet); xbit reviews tend to be chopped up into 60 pages, or damn near enough anyway... Give me a reason to self-flagellate, and I will! :)

Hehe :smile: Pretty much they claim both cards are CPU limited but one is significantly outperforming the other. Kind of contrary to the idea of CPU limitation.

And Cosmo, I doubt that's the case, since this doesn't happen in other CPU-limited situations - HL2, for example. I think it's just down to the fact that the GTX is much faster in older, more texture-dependent titles. I just found their conclusion completely illogical.
 
I wonder: it could be a case of the cards being limited in some stage of vertex processing, with the ATI cards having less processing power in that particular stage, couldn't it? At least theoretically...
 
trinibwoy said:
Hehe :smile: Pretty much they claim both cards are CPU limited but one is significantly outperforming the other. Kind of contrary to the idea of CPU limitation.
Drivers, GPU architecture, blah blah blah - they all affect where CPU limitations cut in.

Also, these are averages so it could simply be higher highs or lower lows that create the disparity.
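To put made-up numbers on that (figures invented for illustration, not from the article): two cards can share the same CPU-bound floor yet report quite different average framerates if one reaches higher peaks.

Code:
# Hypothetical per-second framerate samples for two cards over the same run.
# Both bottom out at the same CPU-bound floor (~60 fps), but card B peaks higher.
card_a = [60, 62, 61, 60, 63, 60]
card_b = [60, 95, 61, 110, 63, 60]

print(sum(card_a) / len(card_a))   # ~61.0 fps average
print(sum(card_b) / len(card_b))   # ~74.8 fps average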

Which reminds me, I expect one of the reasons CrossFire shows limitations in comparison with SLI is that its max framerates are capped by the DVI bandwidth limit. Actually, I expect SLI's max framerates are similarly capped, just somewhere higher. Well, that's my suspicion...

And Cosmo, I doubt that's the case, since this doesn't happen in other CPU-limited situations - HL2, for example. I think it's just down to the fact that the GTX is much faster in older, more texture-dependent titles. I just found their conclusion completely illogical.
I also find xbitlabs's foot-of-benchmark conclusions sometimes wayward, often with entirely contradictory evidence sitting just above them.

Jawed
 
trinibwoy said:
Hehe :smile: Pretty much they claim both cards are CPU limited but one is significantly outperforming the other. Kind of contrary to the idea of CPU limitation.

Not at all. If one driver is more CPU efficient than the other, that is exactly what one would expect.
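A toy frame-time model makes the point concrete (every number here is invented): if the CPU cost per frame is game work plus driver overhead, two cards can both be CPU limited and still post different framerates.

Code:
# Toy model: a frame can't finish faster than the slower of the CPU and GPU sides.
def fps(game_cpu_ms, driver_cpu_ms, gpu_ms):
    frame_ms = max(game_cpu_ms + driver_cpu_ms, gpu_ms)
    return 1000.0 / frame_ms

game_cpu_ms = 7.0   # engine work per frame on the CPU
gpu_ms = 5.0        # GPU work per frame at this resolution (both cards fast enough)

print(fps(game_cpu_ms, driver_cpu_ms=1.0, gpu_ms=gpu_ms))  # leaner driver: 125 fps
print(fps(game_cpu_ms, driver_cpu_ms=3.0, gpu_ms=gpu_ms))  # heavier driver: 100 fps
# Both cases are CPU limited (the GPU sits idle part of the frame),
# yet the framerates differ because of the driver's share of the CPU time.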
 
Heh, and here I thought Xbit's conclusions were nice in that they at least tried to explain performance differences (not a universal trait, sadly). To be fair, after you post your 30th explanatory blurb, you might be just sloppy or light-headed. :LOL:

trin, you're referring to the counter-intuitive fact that NV cards actually get faster as the resolution increases, while ATI cards remain actually bound? Flipping between the (twice shown) "pure" graph and the (omitted) "eye candy" one, you notice ATI drops ten frames when candified but is otherwise bound at 10x7 and 12x10. The NV cards, however, remain at the same increasing framerates.

My first thought is driver efficiency, which I guess is a roundabout way of saying CPU limitation. Maybe the extra pixels mean NV can group more together for added efficiency? Maybe it's also a byproduct of NV's TMUs having a closer relationship to their fragment ALUs than ATI's? Or of NV using brilinear in UT (does it still do that?), if that helps more the higher the resolution goes? Would forcing HQ filtering in NV's drivers or switching to OGL rendering help elucidate anything?

Ultimately, does it even matter when you're getting 100fps at 16x12 4x8 in a game this old? Better Anton save his brain for more pertinent analysis. :)
 
Pete said:
Heh, and here I thought Xbit's conclusions were nice in that they at least tried to explain performance differences (not a universal trait, sadly). To be fair, after you post your 30th explanatory blurb, you might be just sloppy or light-headed.
:LOL:

Well yeah but there's no scientific method to it. Based on which card wins the benchmark they just toss out some random reason like "higher clock", "more pipes", "more efficient shaders", "more bandwidth", "efficient memory controller" or somesuch with nothing to back it up besides the fact that one bar is longer than another. That's what I'm referring to.
 
At one time didn't we have some reason to think that NV was leaning harder on the CPU when it had available resources? That might affect this kind of thing in odd ways, depending on how they implement it and how smart it is at turning itself on and off as other demands on the CPU ramp up and down.
 
trinibwoy said:
Well yeah but there's no scientific method to it. Based on which card wins the benchmark they just toss out some random reason like "higher clock", "more pipes", "more efficient shaders", "more bandwidth", "efficient memory controller" or somesuch with nothing to back it up besides the fact that one bar is longer than another. That's what I'm referring to.
Ah. I'm assuming there's substance behind their conclusions, but you may be right.
 
trinibwoy said:
Well yeah but there's no scientific method to it. Based on which card wins the benchmark they just toss out some random reason like "higher clock", "more pipes", "more efficient shaders", "more bandwidth", "efficient memory controller" or somesuch with nothing to back it up besides the fact that one bar is longer than another. That's what I'm referring to.
I thought there was a random tech-term generator just for websites. And then the Hyperbole Word Generator 9000, which spouts out "crushed" or "destroys" for a 5fps difference between cards. TH had the "Royal Edition"; it would spit out the term "the new king". They all seem to take a PR disk to match the right result to the right tech term.
 
trinibwoy said:
Well yeah but there's no scientific method to it. Based on which card wins the benchmark they just toss out some random reason like "higher clock", "more pipes", "more efficient shaders", "more bandwidth", "efficient memory controller" or somesuch with nothing to back it up besides the fact that one bar is longer than another. That's what I'm referring to.
To a certain extent you're right, like when they talk about the larger Hi-Z buffer (they only test at 1600x1200, so that's BS), but often they make pretty good conclusions.

For example, the effect of bandwidth can be seen by comparing the GTX512 with the regular GTX. The former has 42% more bandwidth but only 28% more core clock speed, so if the performance jump is over 30%, bandwidth is the most likely cause. The delta between the two as you enable AA/AF also gives you information. Similarly, if R580 gets only a small speed bump over R520, but the GTX512 increases by less than 30% over the GTX, then heavy texturing (or stencils in Doom3) is likely the culprit.
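A rough sketch of that arithmetic (the 42%/28% ratios are taken from the paragraph above; the "observed" speedup is a made-up placeholder, not a real measurement):

Code:
# Compare an observed GTX512-vs-GTX speedup against the two scaling factors.
bandwidth_ratio = 1.42   # ~42% more memory bandwidth on the GTX512
core_ratio = 1.28        # ...but only ~28% more core clock

observed_speedup = 1.35  # hypothetical benchmark result for illustration

if observed_speedup > core_ratio:
    # Gains beyond what the core clock alone can deliver point at bandwidth.
    print("speedup exceeds the core-clock delta -> bandwidth is the likely limiter")
else:
    print("speedup within the core-clock delta -> core/shader rate is the likely limiter")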

As for the statement you quoted in your original post for the thread, I think XBit was bang on. I can definitely say that their conclusions and explanations are much better than those of nearly every other review website. Don't forget that their English doesn't always come out right, either.

The only sites that I think are better are Beyond3D and Digit-Life.
 
This is a common misconception. CPU bound doesn't mean that all cards will perform the same; it just implies that turning up the resolution doesn't have much effect on the framerate of any given card, while lowering the clockspeed of the CPU will.
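A quick sketch of that test with invented numbers: scale the GPU cost with resolution and the CPU cost with clock speed, and see which one actually moves the framerate.

Code:
# Toy model of the "am I CPU bound?" check described above; all numbers invented.
def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

base_cpu_ms = 10.0   # CPU cost per frame at stock clock
base_gpu_ms = 4.0    # GPU cost per frame at 1024x768

for label, pixels in [("1024x768", 1024 * 768), ("1600x1200", 1600 * 1200)]:
    gpu_ms = base_gpu_ms * pixels / (1024 * 768)   # GPU cost scales with pixel count
    print(label, round(fps(base_cpu_ms, gpu_ms), 1), "fps")
# Both resolutions come out at 100 fps: raising the resolution barely matters.

print("slower CPU:", round(fps(base_cpu_ms / 0.8, base_gpu_ms), 1), "fps")
# ...but underclocking the CPU by 20% drops it to 80 fps straight away.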
 
kyleb said:
This is a common misconception. CPU bound doesn't mean that all cards will perform the same; it just implies that turning up the resolution doesn't have much effect on the framerate of any given card, while lowering the clockspeed of the CPU will.

Yup, it isn't 100% CPU bound yet. I think the 7800 has a bit more fillrate and memory bandwidth, and UT2004 is purely texture limited with these fast shader-capable cards.
 
Jawed said:
Which reminds me, I expect one of the reaons CrossFire shows limitations in comparison with SLI is that I think its max framerates are capped due to the DVI-bandwidth limit. Well, actually I expect SLI's max framerates are similarly capped, but somewhere higher. Well, that's my suspicion...
Well, I don't think so. From what I understand about CrossFire, it's designed to work so that a master card can operate with any other graphics hardware. This would have to mean that it operates on a DAC level. As such, it shouldn't have any effect on obtainable framerates (except that in Supertiling mode, you would not want to run with vsync disabled).
 
Skinner said:
Yup, it isn't 100% CPU bound yet. I think the 7800 has a bit more fillrate and memory bandwidth, and UT2004 is purely texture limited with these fast shader-capable cards.
Dude, if it was limited at all by fillrate, memory bandwidth, or texture rate, then changing the resolution would have a significant impact on framerate. It doesn't. So it's most likely limited by the CPU.

Anyway, I know that on my system, UT2004 is one of the few games I play in which I can run at 1600x1200 with 8xS and 16-degree AF at high framerates (7800GT SLI), with all in-game options turned up to the max. I mean, if I can run that high, this game has very, very little fillrate limitation compared to most modern games.
 
Chalnoth said:
Dude, if it was limited at all by fillrate, memory bandwidth, or texture rate, then changing the resolution would have a significant impact on framerate. It doesn't. So it's most likely limited by the CPU.

Exactly.
 