G70 vs X1800 Efficiency

saf1 said:
Regarding NV30 vs the R5xx - is it different? How so? Late is late. If you don't believe me, look at the stock and tell me what the investors say, ok? It is late, they made mistakes. My point is that both companies made mistakes in bringing a much-hyped card to the market. It does not matter if the card was good or bad; they BOTH had problems in bringing a card to market.

That's a pretty damn narrow view to take on the situation. And do you really think the quality of NV30, in direct comparison to competing parts available on the market at the time of its release, had no bearing whatsoever on how that part was received? NV30 was also later and really something of a dud compared to R300, a correlation that simply doesn't work, in my opinion, for the X1800s when compared to the 7800s.
 
Veridian3 said:
As for configurations, the clock rate was noted as 450MHz to keep it simple for the non-technical readers out there; however, care was taken to make sure the 3 internal clocks were as comparable with the R520 as possible. Also, we confirmed the pipelines were disabled.
Veridian, what clocks was the G70 running, exactly? 459/490MHz (pixel/vertex), as Unwinder posted elsewhere, or lower? It probably isn't a big deal, but it does affect the 3DMark score (at least the vertex tests) more than anything else.
 
John Reynolds said:
That's a pretty damn narrow view to take on the situation. And do you really think the quality of NV30, in direct comparison to competing parts available on the market at the time of its release, had no bearing whatsoever on how that part was received? NV30 was also later and really something of a dud compared to R300, a correlation that simply doesn't work, in my opinion, for the X1800s when compared to the 7800s.

I'm keeping it simple. Both companies had issues, I think we can agree. Both impacted how investors viewed the companies. Both had projects run in conjunction with Microsoft and the Xbox.

Not everything needs to be equal to compare or contrast.

Late is late; it does not matter how many days after the fact, now does it?

Also - I honestly do not care about either card. I was just saying that it happened when someone said it wouldn't. We both know the general consumer will never do it, but that is beside the point. It did happen and now people are talking about it. No harm, no foul.
 
Humus said:
I'm not worried about X1800's ability to compete. I'm more worried about the average gamer being able to understand what these results actually mean. When "configured similarly" the G70 may indeed have higher IPC throughput in these tests, but claiming this is a measure of "efficiency" is wrong in my book since they will never be "configured similarly" when you find them in the stores, so it's certainly not something I'd advise anyone to take into account when making purchasing decisions. If anything, this is an IPC test, not an efficiency test.
Fine, and in that case I would agree; perhaps it should not be front-page material, as we don't want to confuse people who are clueless. The conclusion, well, it is kind of bad; in fact, I did not even read it at first because I realized it meant jack and shit in regard to which card to buy.

That being said, neither Nvidia nor ATI seems to mind confusing the consumer when it is in their interest. The sole purpose of propaganda or misleading names is to take advantage of a confused consumer.
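To make Humus's distinction concrete: an equal-clock test isolates per-clock throughput, but the card you buy runs at its own shipping clock, so the ranking can change. Here's a minimal sketch with made-up throughput numbers (placeholders, not measured figures):

```python
# Toy model of "performance = per-clock throughput x clock".
# The throughput values below are placeholders, not benchmark data.

def fps(work_per_clock: float, clock_mhz: float) -> float:
    """Very rough frame-rate model: per-clock work times clock."""
    return work_per_clock * clock_mhz / 10.0

# At a common 450 MHz test clock the higher-IPC chip looks better...
print(fps(1.10, 450))   # chip A (higher per-clock throughput): 49.5
print(fps(1.00, 450))   # chip B (lower per-clock throughput): 45.0

# ...but at hypothetical shipping clocks the ranking flips.
print(fps(1.10, 430))   # chip A at its retail clock: 47.3
print(fps(1.00, 625))   # chip B at its retail clock: 62.5
```

Which is exactly why an equal-clock comparison says little about which card to actually buy.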
 
PatrickL said:
You are voluntarily confusing needs and design choices.

I am?

I think I'm just asking a question in a roundabout way. Has Nvidia needed to scale a GPU to those speeds to compete? Or did ATI, for that matter? One could argue either point, I'm sure.
 
saf1 said:
I am?

I think I'm just asking a question in a roundabout way. Has Nvidia needed to scale a GPU to those speeds to compete? Or did ATI, for that matter? One could argue either point, I'm sure.

Nvidia certainly tried to, but were unsuccessful with NV30. Faster chips are good - I don't know why you are trying to make out that having a faster-clocked chip is somehow a bad thing. In case you hadn't noticed, the whole semiconductor industry has been making its chips go faster and faster over the last 30 years.
 
saf1 said:
I am?

I think I'm just asking a question in a roundabout way. Has Nvidia needed to scale a GPU to those speeds to compete? Or did ATI, for that matter? One could argue either point, I'm sure.

Basically, what Patrick is saying is that when ATI & NV designed the R520 and G70 respectively, they would have decided the clock speeds they were targeting as part of the design process.

We've no information to indicate that ATI or NV were aiming at either higher or lower clocks than their chips actually achieve, so it would be best to assume (IMO) that they both reached their target ranges.

It seems obvious from G70's 24-pipe design that they were not targeting clock speeds as high as ATI's 16-pipe design, so we've no real indication that ATI has had to force their clock speeds higher than intended, as seems to be your opinion. If both chips had been released at the same time then perhaps G70 might have been clocked a little higher, but then who knows what may or may not have occurred?
 
That's another thing that most people seem to be overlooking.

R520 has almost a 10GB/s bandwidth advantage and a 625MHz core clock, yet most of the time it *barely* outperforms or loses to the 7800 GTX.

A 10GB/s bandwidth advantage... just to break even. That is somehow a more efficient architecture? Hardly.

Which is why DriverHeaven's efficiency test is valid. It shows what is really going on. The R520 is not even close to an efficient architecture... it uses extremely high clocks and gross overkill in bandwidth just to get a 3 FPS lead.
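For what it's worth, the "almost 10GB/s" figure checks out if you assume the reference memory configurations (256-bit buses on both cards, roughly 750MHz/1.5GHz-effective GDDR3 on the X1800 XT versus 600MHz/1.2GHz-effective on the 7800 GTX); a quick back-of-the-envelope sketch:

```python
# Peak memory bandwidth = memory clock x 2 (DDR) x bus width in bytes.
# Reference clocks and 256-bit buses are assumed here, not measured.

def bandwidth_gb_s(mem_clock_mhz: float, bus_width_bits: int) -> float:
    return mem_clock_mhz * 1e6 * 2 * (bus_width_bits // 8) / 1e9

x1800xt_bw = bandwidth_gb_s(750, 256)   # ~48.0 GB/s
gtx_bw     = bandwidth_gb_s(600, 256)   # ~38.4 GB/s
print(x1800xt_bw - gtx_bw)              # ~9.6 GB/s -- the "almost 10GB/s" gap
```

Whether that extra bandwidth counts as overkill depends on whether the workload is actually bandwidth-limited.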
 
I'm sure it's been pointed out elsewhere in this thread, but Brent's photos on HardOCP clearly show a substantial IQ advantage on the 520, particularly WRT filtering. Comparing the GTX and 1800 seems simply fallacious if one cannot achieve equivalent IQ, biased toward the higher end. For example, ATI may have an architecture that is actually inefficient without HQ filtering and AA but far more efficient with these on. AFAIK there is no way to actually match the 520's filtering on the GTX either.
 
Hellbinder said:
R520 has almost a 10GB/s bandwidth advantage and a 625MHz core clock, yet most of the time it *barely* outperforms or loses to the 7800 GTX.

And R520 has an 8 TMU and 8 ROP disadvantage. Point?

A 10GB/s bandwidth advantage... just to break even. That is somehow a more efficient architecture? Hardly.

It all depends on what you're rendering.

[R520] uses extremely high clocks and gross overkill in bandwidth just to get a 3 FPS lead...

In contrast, G7x uses "gross overkill in ROPs and TMUs" just to be handed a defeat.

(In case you haven't noticed, the design goals for the two architectures are different.)
 
Hellbinder said:
R520 has almost a 10GB/s bandwidth advantage and a 625MHz core clock, yet most of the time it *barely* outperforms or loses to the 7800 GTX.

A 10GB/s bandwidth advantage... just to break even. That is somehow a more efficient architecture? Hardly.
If the test is not bandwidth limited, then you won't see any benefit from the extra bandwidth. The extra bandwidth is very useful for things that are bandwidth limited, such as MSAA.
Which is why DriverHeaven's efficiency test is valid. It shows what is really going on. The R520 is not even close to an efficient architecture... it uses extremely high clocks and gross overkill in bandwidth just to get a 3 FPS lead.
So it's valid because it shows the R520 in a bad light? Some bias showing here?
 
You have to excuse Hellbinder. He's an uber fanb*y who feels that ATI "owes" it to him to deliver parts that meet exceedingly unrealistic expectations. (Because that's what he preaches is coming, without thinking or knowing any facts.) And if it's not delivered, it's like a personal affront to him.

Reminds me of certain individuals in the PowerVR camp back in the Neon era...
 
Joe DeFuria said:
And R520 has an 8 TMU and 8 ROP disadvantage. Point?

In contrast, G7x uses "gross overkill in ROPs and TMUs" just to be handed a defeat.

(In case you haven't noticed, the design goals for the two architectures are different.)

How does the G7x have gross overkill in ROPs and TMUs when it has a fillrate very similar to the R520?
 
CMAN said:
How does the G7x have gross overkill in ROPs and TMUs when it has a fillrate very similar to the R520?

(Hence my point... going the other way, how is it that R520 has "extremely high" clocks when the absolute fill rates (texture) are similar?)
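The texture fillrate similarity is easy to sanity-check, assuming the reference core clocks (430MHz for the 7800 GTX, 625MHz for the X1800 XT) and the respective TMU counts (24 vs 16):

```python
# Peak texture fillrate = TMU count x core clock (reference clocks assumed).

def mtexels_per_s(tmus: int, core_clock_mhz: int) -> int:
    return tmus * core_clock_mhz

g70_fill  = mtexels_per_s(24, 430)   # 7800 GTX: 10,320 Mtexels/s
r520_fill = mtexels_per_s(16, 625)   # X1800 XT: 10,000 Mtexels/s
print(g70_fill, r520_fill)           # within roughly 3% of each other
```

So "high clocks" on one side and "lots of units" on the other end up at roughly the same absolute texture fillrate - just two different routes to it.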
 
Joe DeFuria said:
(Hence my point... going the other way, how is it that R520 has "extremely high" clocks when the absolute fill rates (texture) are similar?)

I'm with you on that point. I think there is a knee-jerk reaction to high frequencies due to the P4 Prescott.
 