G70 vs. X1800 Efficiency

AlphaWolf said:
I see a few people saying they think it's interesting, perhaps you could help me understand why it's interesting to you?
Well, it is quite simple. Whether they did it correctly is still in doubt, but let's assume they actually were able to make things as fair as possible in memory bandwidth and so forth.

This does have the potential to highlight the different factors that influence each IHV's decisions and how they have panned out. For example, you could check (well, in the future) how a high clock affects the SM3 branching capabilities, such as whether it actually hurts the R520 when a branch misses; in other words, does that become less of an issue at a lower clock or more of one? Assuming something actually becomes more efficient at a lower clock and something else at a higher clock, it would allow insight into the tradeoffs that the various parties have made. Yes, it would be interesting to just have the X1800 and see how it varies, but I also find it interesting to have both there at once. In any case, I am sure someone much more eloquent than I can express why they find it interesting.
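To make that concrete, here is a minimal sketch of the kind of per-clock normalization being described; every number in it is a hypothetical placeholder, not a figure from the DriverHeaven article:

```python
# Hypothetical sketch of "efficiency" normalization once two cards have been
# forced to the same pipe count and clock. All numbers are placeholders.

def per_clock_score(fps: float, core_mhz: float) -> float:
    """Frames per second delivered per MHz of core clock."""
    return fps / core_mhz

# Made-up results for two cards both run at 16 pipes / 430 MHz:
results = {
    "card_a": {"fps": 52.0, "core_mhz": 430.0},
    "card_b": {"fps": 47.0, "core_mhz": 430.0},
}

for name, r in results.items():
    print(f"{name}: {per_clock_score(r['fps'], r['core_mhz']):.3f} fps/MHz")

# The follow-up suggested above would be to repeat this at several clocks and
# see whether the per-clock score itself shifts, i.e. whether anything in the
# design (branching, for instance) scales non-linearly with frequency.
```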
 
Yes, I'm sure that all of those XL customers who had already planned to run their shiny new $500 graphics cards at a clock rate 30% slower than standard will be mightily upset by these results...;) Jeepers, while we're at it, why don't we take a G70 and clock it to 166MHz and see how well it stacks up against a Voodoo 5? Wow, that sounds neato, too!

And while we're at it, why don't we take the software nVidia runs slowest at normal clocks in comparison to the competition and see how much slower the G70 will run that software at 166MHz?

Yes, this kind of "let's overclock a G70 by 5% and underclock an XL by 30% and see what happens" testing is just oh-so-interesting, I scarcely know where to begin discussing it....

[not.]

The real equation here of interest to me is that while you can drastically underclock an XL to match slightly overclocked G70 speeds, there's no way in hell to slightly overclock the XL and then bring up the G70 clock to match it, is there? Gaw lee...:D Whadd'ay'a think about dem apples?...:D

I thought I had seen it all in the old days, you know, when poor old 3dfx's V3 was run at normal speeds against drastically overclocked nVidia TNT2's--but now we've run the gamut again by overclocking nV's products and drastically underclocking ATi's just so we can see how much slower than standard we can make the ATi product run--in order to say something "profound" about it all. Unbelievable.

It's unfortunate that the G70 faces such a competitive clock-rate differential, but gee, let's not start fantasizing that the ATi products are clocked from the factory as slow as nV's products are clocked, just to help poor ol' nV out, again. Please? Heh...;)

Edit: I also want to add this: if we're going to use software in an attempt to cripple both products in some fashion, like clockspeed for the ATi product and operational pipes for the nV product, then excuse me if I say I don't see the point. I mean, it seems to me that taking a sledge hammer to both chips and then declaring that neither works as advertised would be just as expositional.
 
Humus said:
Perhaps the option in RivaTuner doesn't work so it's actually still running with all pipes?

But I agree with OpenGL guy, this is not a particularly useful test. If we clocked a P4 down to Athlon level and found it ran extremely slowly in comparison, would that mean the P4 architecture is vastly inferior to the Athlon?

No, the option in RivaTuner works fine. Any fillrate tester will show you this.
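For anyone wondering what the fillrate tester actually shows: theoretical fillrate is roughly active pipes times core clock, so if the RivaTuner mask had silently failed, the measured number would stay near the full-pipe figure. A rough sketch with illustrative numbers only (on real chips the measured figure can be limited by a different unit count, such as ROPs, so it only approximates this):

```python
# Rough sketch: theoretical fillrate ~= active pipes x core clock.
# A fillrate tester still reporting numbers near the full-pipe figure after
# masking pipes off would indicate the mask did not take effect.
# Numbers are illustrative; real measurements can be limited elsewhere (e.g. ROPs).

def theoretical_fillrate_mpix(pipes: int, core_mhz: float) -> float:
    """Peak single-textured fillrate in Mpixels/s."""
    return pipes * core_mhz

print(theoretical_fillrate_mpix(24, 430.0))  # all pipes enabled
print(theoretical_fillrate_mpix(16, 430.0))  # masked down to 16 pipes
```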
 
WaltC said:
Jeepers, while we're at it, why don't we take a G70 and clock it to 166MHz and see how well it stacks up against a Voodoo 5? Wow, that sounds neato, too!
...
It's unfortunate that the G70 faces such a competitive clock-rate differential, but gee, let's not start fantasizing that the ATi products are clocked from the factory as slow as nV's products are clocked, just to help poor ol' nV out, again. Please? Heh...;)
For the first part, yeah, that would be interesting. If someone could compare a vast array of cards, even if it required shutting off all but two pipes or whatever, yes, it would be interesting to me.

For the second part, you are blindingly partisan. I never understood why anyone would become such a fanatic in regards to any piece of hardware or a company, but perhaps you can enlighten me as to what exactly causes this? Do you have relatives that work there? Stock options? Just want a team to cheer for? Seriously, it is almost a religious zeal, and I do not understand why anyone would have such a passion for a particular company.
 
Sxotty said:
Well, it is quite simple. Whether they did it correctly is still in doubt, but let's assume they actually were able to make things as fair as possible in memory bandwidth and so forth.

This does have the potential to highlight the different factors that influence each IHV's decisions and how they have panned out. For example, you could check (well, in the future) how a high clock affects the SM3 branching capabilities, such as whether it actually hurts the R520 when a branch misses; in other words, does that become less of an issue at a lower clock or more of one? Assuming something actually becomes more efficient at a lower clock and something else at a higher clock, it would allow insight into the tradeoffs that the various parties have made. Yes, it would be interesting to just have the X1800 and see how it varies, but I also find it interesting to have both there at once. In any case, I am sure someone much more eloquent than I can express why they find it interesting.

Even if it had been done right, I don't see how this comparison would do that better than testing one product against itself.
 
The DH article is interesting, it's just not done completely right, IMO.

It'd be much more informative if DH would go on to explain why this is, show a few more tests at wider settings, etc. Instead of a quick test, I personally want to know why something is the way it is; just telling me that's the way it is, is useless.
 
Sxotty said:
For the first part, yeah, that would be interesting. If someone could compare a vast array of cards, even if it required shutting off all but two pipes or whatever, yes, it would be interesting to me.

For the second part, you are blindingly partisan. I never understood why anyone would become such a fanatic in regards to any piece of hardware or a company, but perhaps you can enlighten me as to what exactly causes this? Do you have relatives that work there? Stock options? Just want a team to cheer for? Seriously, it is almost a religious zeal, and I do not understand why anyone would have such a passion for a particular company.

Obviously, what you say interests you doesn't interest me, as I've stated. I prefer to use my hardware and software as intended, and with 3d hardware that means with the appropriate number of pipes running and at the standard clockspeeds, etc. Basically, if I wanted fewer pipes and/or a slower clockspeed I'd buy the appropriate product set up that way from the factory, and save myself a bundle in the process. Seems like perfectly good sense to me. Deliberately crippling products by means of software techniques never intended for those products by the manufacturers seems like a great way to expound on a whole lot of nothing. Sorry that you find my opinion here "blindingly partisan" because of course it isn't: it's just blindingly sensible.

I mean, have you ever stopped to think about what else you might be doing to these chips and their environments by crippling them in this way--aside from what it is you think you're doing to them? It isn't certain from your comments that you have. I have thought about it, is all I'm saying.
 
ninelven said:
And I would have thought those who didn't find it interesting would not have bothered posting in this thread...

That's the beauty of threads in forums...they're often so educational. Learn something new every day, maybe...;)
 
Walt, it has nothing to do with your post here; it has to do with your post history. One simply has to read a few of your posts to grasp it. I am honestly curious as to why you have such an affection for ATI, at least in comparison to Nvidia. I simply do not understand what such a thing stems from. That is why I postulated a few ideas. It is actually rare on these boards to get such a devout disciple of any hardware manufacturer, at least in the form of someone who can string together a complete sentence. (In other words, you seem sensible enough ;) )
 
WaltC said:
Obviously, what you say interests you doesn't interest me, as I've stated. I prefer to use my hardware and software as intended, and with 3d hardware that means with the appropriate number of pipes running and at the standard clockspeeds, etc. Basically, if I wanted fewer pipes and/or a slower clockspeed I'd buy the appropriate product set up that way from the factory, and save myself a bundle in the process.
Maybe, just maybe those kinds of comparisons aren't made to influence your personal purchasing decision (or mine for that matter)?

I find them interesting, because those kinds of games can shed light on the architecture itself (you know, the whole point of Beyond3D), highlight different tradeoffs made, and can be an indicator of cost and difficulty for IHVs.
 
ninelven said:
And I would have thought those who didn't find it interesting would not have bothered posting in this thread...

How were they supposed to know it wouldn't be useful if they didn't read the article? Common sense...
 
Bob said:
Maybe, just maybe those kinds of comparisons aren't made to influence your personal purchasing decision (or mine for that matter)?

I find them interesting, because those kinds of games can shed light on the architecture itself (you know, the whole point of Beyond3D), highlight different tradeoffs made, and can be an indicator of cost and difficulty for IHVs.

I wasn't aware that this article had been run on B3d...wasn't it run elsewhere? You seem to be saying that while some opinions on this kind of thing should be discussed in a B3d forum, dissenting opinions such as mine should not be. Is that what you're saying? Pro and con is the hallmark of serious discussion, isn't it? How can you have one without the other and think you've had a worthwhile discussion?

My point is simply that when you use software techniques not intended by the manufacturers to cripple the operational aspects of chips--any chips--you may be doing more than you think you are doing to adversely affect the operation of one or both of them. If so, then you aren't doing what you think you are doing when you test them under those conditions--and if so then there's hardly any point in doing it, is there? The conclusions you may draw from such tests may therefore be in error, is what I'm saying.
 
When you run any test you might screw it up and therefore we should run no tests... or not

Walt, what exactly do you think the problem was with these specific tests that makes them invalid/unreliable? What do you take issue with? I see little discussion actually going on with regard to the methodology, only a tiny snippet about the % change in bandwidth and fillrate. Other than that, it seems a bunch of folks are saying "this is unfair, I don't like it."
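To put some arithmetic behind that "% change in bandwidth and fillrate" snippet, a back-of-the-envelope sketch; the clocks and bus width here are hypothetical, not the settings DriverHeaven used:

```python
# Back-of-the-envelope: how core/memory clock changes translate into % changes
# in theoretical fillrate and bandwidth. All clocks are hypothetical.

def pct_change(new: float, old: float) -> float:
    return 100.0 * (new - old) / old

def bandwidth_gb_s(mem_mhz_effective: float, bus_bits: int = 256) -> float:
    """Peak memory bandwidth in GB/s for a given effective memory clock."""
    return mem_mhz_effective * bus_bits / 8 / 1000.0

core_stock, core_test = 500.0, 430.0   # MHz, hypothetical
mem_stock, mem_test = 1000.0, 860.0    # MHz effective (DDR), hypothetical

print(f"fillrate change (pipes held constant): {pct_change(core_test, core_stock):+.1f}%")
print(f"bandwidth: {bandwidth_gb_s(mem_stock):.1f} -> {bandwidth_gb_s(mem_test):.1f} GB/s "
      f"({pct_change(mem_test, mem_stock):+.1f}%)")
```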

There is no real competition here, at least there should not be; it is just supposed to be investigative. Perhaps if you could enlighten us as to the failings of this particular test, it would go a lot farther towards convincing those interested than simply saying it is not valid to judge which is "better" based on the test. While a few have argued that point, most do not seem to be doing so.
 
Sxotty said:
Walt, it has nothing to do with your post here; it has to do with your post history. One simply has to read a few of your posts to grasp it. I am honestly curious as to why you have such an affection for ATI, at least in comparison to Nvidia. I simply do not understand what such a thing stems from. That is why I postulated a few ideas. It is actually rare on these boards to get such a devout disciple of any hardware manufacturer, at least in the form of someone who can string together a complete sentence. (In other words, you seem sensible enough ;) )

If you can dig up my posts circa 2002-2003 on B3d, you'll develop a healthy appreciation for what it is about nV I don't like and what it is about ATi that I like. It's all found in the contrast between the public profiles and statements of the two companies and, of course, in their products.

I really think using the word "disciple" is far too melodramatic...;) I'm just someone with a very clear idea of what he likes and why. I'm all for prejudice, by the way, as long as it is justifiable. What I have a rather low tolerance for, though, is prejudice which is based on fiction, unsupported extrapolation, or baseless hyperbole. That includes things like overclocking one IHV's card when comparing it to another IHV's card which is not overclocked, and then pretending that the result is entirely acceptable. That sort of crass attempt at misinformation has always infuriated me from the first time I ever saw it, when some "reviewers" were using their sites to compare overclocked TNT2's to standard-clocked V3's...;) To this day I see no point in such exercises aside from common propaganda.
 
That is a fine reason to prefer ATI (and one should, I guess); I just don't hold a grudge well...

I really doubt DriverHeaven was trying to make propaganda against ATI; that would be at least a bit out of character. This is not supposed to be a buyers' guide really, it was just a bit of investigation by a party lucky enough to have both cards.

I agree with you 100%, though, if it were a review and one card is overclocked simply to make it look better. I do not particularly like the fact that so many of the 7800 GT/GTX cards are factory overclocked for precisely that reason; it just clouds the issue in a review. Should a reviewer use a BFG OC card, for example? Not IMO, unless they also have a stock card. At the same time I understand the desire to, because extrapolating when the machines have changed can be difficult, e.g. "well, a BFG OC 7800 got 3% faster on this test with a 4% slower CPU, so I think it might be on par with the X1800 XL, which is now 6% faster." That sort of thing can be confusing.
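To show why that kind of cross-rig extrapolation gets murky, a toy calculation with made-up numbers; the naive adjustment below assumes framerate scales linearly with CPU speed, which only holds when the test is CPU-limited:

```python
# Toy example of why extrapolating across different test rigs is confusing.
# All numbers are made up.

fps_measured = 61.8   # factory-OC card measured on a rig with a 4% slower CPU
cpu_deficit = 0.04

# Naive "correction": pretend the missing CPU speed would have shown up 1:1 as fps.
naive_adjusted = fps_measured * (1.0 + cpu_deficit)
print(f"naive adjusted fps: {naive_adjusted:.1f}")

# If the test was actually GPU-limited, the right adjustment is close to 0%,
# so the naive figure flatters the card -- exactly the kind of fog a
# factory-overclocked sample adds on top of a review.
```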

P.S. "Devout disciple" was alliteration, and who can resist it, even if it is overdramatic.
 
OpenGL guy said:
When R5xx was designed, how could it have been designed to be more efficient than a product that didn't exist (i.e. the G70)?

Ummm. I am not sure how to respond to this. How was the NV40/G70 designed when nothing with that efficiency existed?
I thought it was clear that the R520 was designed to be more efficient than previous ATI parts.
Sure, and it also seems that the R520 was designed to clock higher. I think we can assume designs will become more efficient and clock higher; therefore studying one of the two, like efficiency, is interesting.
Edit: Note that I am not saying that R520 is more or less efficient than G70, I am merely stating that when it was designed ATI could not have targeted G70 efficiency.
I am going to give you the benefit of the doubt here and assume you are saying this, and don't find the DH experiment interesting, because you have seen far more comprehensive analysis behind the closed doors of ATi. However, us peons don't have this privilege and therefore have to rely on studies such as the one discussed here.

I am not sure why you are so specific about targeting G70 efficiency and why you think this is something ATi would do. Surely ATi would take their own ideas, pit efficiency and frequency against each other, amongst other things, and solve for their optimum, regardless of what Nvidia is doing.

I think you are right that pitting R520 and R480 clock for clock would be interesting, but it would also be uninteresting because one is an SM 2.0 part while the other is SM 3.0. Pitting the 6800 GT/Ultra against a 16-pipe 7800 GTX would also be interesting. Perhaps it is all just interesting because...well, that's what Beyond3D is all about, and I think DemoCoder's comment about the 550MHz RSX is pertinent, as this part should be G70-based, 90nm, and evidently clocks high even in a less-than-cooling-friendly system like the PlayStation 3 (remains to be seen, though).
 
Sxotty said:
When you run any test you might screw it up and therefore we should run no tests... or not

Walt, what exactly do you think the problem was with these specific tests that makes them invalid/unreliable? What do you take issue with?

Well, for the third time (at least)...when you use software to disable pipelines in a chip designed to run with all pipelines operational, or you use software to lower the clock below that which the manufacturer intends for that particular chip, just what is it that you think you are going to find out except that such chips run slower than their manufacturers intend? How much slower is really the issue, isn't it?

To put it another way, where is the utility or value for a purchaser of such products in such tests? If it's common to see people buying $500 3d cards just to clock them slower than stock, or just to run them with fewer pixel pipes open than intended, I have to say I have never seen it. Generally the tendencies are the reverse--to open up more than the allowed pipes and to clock the gpus higher than at stock.

And last, since the software used to accomplish such ends isn't software which the manufacturers of these products recommend be used with these products in normal operation, then it cannot be assumed that closing off pixel pipes and running down the clock are the only things happening to these chips under these conditions. It is only assumed that other than clock rate and the number of pixel pipes running nothing else in the environments for these chips is being adversely affected--it is assumed and not proven and that is the problem I have with such testing. Wouldn't it be better to simply test a 16-pipe G7x chip or a 450MHz R520 card natively, as their manufacturers intend them to be tested, assuming both can be found?

I'm concerned that non-standard operational conditions for both chips may well create non-standard results for both chips. It isn't a difficult point.
 
wireframe said:
I think you are right that pitting R520 and R480 clock for clock would be interesting, but it would also be uninteresting because one is an SM 2.0 part while the other is SM 3.0.
This is exactly why it would be interesting! You could then see if FP32 cost anything in shader efficiency. The two parts share a lot of common features, why would you focus on the things they can't both do? For example, since R480 can't do PS 3.0, why not compare PS 2.0 performance? Of course there are a lot of other things that could be measured.
 
OpenGL guy said:
This is exactly why it would be interesting! You could then see if FP32 cost anything in shader efficiency. The two parts share a lot of common features, why would you focus on the things they can't both do? For example, since R480 can't do PS 3.0, why not compare PS 2.0 performance? Of course there are a lot of other things that could be measured.

OK, I agree, but the most interesting difference, dynamic branching, is also the main efficiency enhancer. It is my understanding that you could design a part to be better at this particular aspect without necessarily improving on the "old" static performance. That is to say, you could have an SM 3.0 part running dynamic-branching circles around an SM 2.0 part but losing in simple SM 2.0 performance; something like hidden surface removal efficiency, but with shaders.
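As a toy illustration of that early-out point (a per-pixel cost model only; real hardware branch granularity and scheduling are ignored entirely):

```python
# Toy model of the dynamic-branching win: with a dynamic branch, pixels that
# fail a cheap test skip the expensive path; without it, every pixel pays the
# full cost. Branch granularity on real hardware is ignored here.
import random

PIXELS = 100_000
CHEAP_COST, EXPENSIVE_COST = 2, 40   # arbitrary "cycles" per pixel
LIT_FRACTION = 0.25                  # fraction of pixels needing the expensive path

random.seed(0)
needs_work = [random.random() < LIT_FRACTION for _ in range(PIXELS)]

# No dynamic branching: both sides effectively execute for every pixel.
cost_static = PIXELS * (CHEAP_COST + EXPENSIVE_COST)

# Dynamic branching: only pixels that need it take the expensive path.
cost_dynamic = sum(CHEAP_COST + (EXPENSIVE_COST if n else 0) for n in needs_work)

print(f"static:  {cost_static} cycles")
print(f"dynamic: {cost_dynamic} cycles ({cost_static / cost_dynamic:.1f}x less work)")
```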

Note that I agree because I find most things interesting, and this is why I initially wondered how you could be so quick to conclude that something like this is uninteresting.

What I find really interesting is how ATi and Nvidia seem to have two diverging approaches regarding efficiency and frequency, but end up at the same place. Sorta.
 