G70 Vs X1800 Efficiency

BTW, let's not even get into the disaster that is the X1600...

Who's the genius that thought up 12 pipelines and 4 ROPs??? Less than 3 Gpix/s fill rate???

8, yeah, that's doable... but 4 on a midrange, mainstream card? In 2005??
 
Hellbinder said:
BTW, let's not even get into the disaster that is the X1600...

Who's the genius that thought up 12 pipelines and 4 ROPs??? Less than 3 Gpix/s fill rate???

8, yeah, that's doable... but 4 on a midrange, mainstream card? In 2005??
I think you're overdoing it with your reactions a tiny bit - steering the thread off-topic doesn't help either.
 
Joe DeFuria said:
Well then you're "with me" period, because that point is the entire failing of Hellbinder's "argument."

I was reinforcing the idea that neither is absurd. I wasn't trying to be against you.

Edit: Obviously, I cannot spell.
 
Last edited by a moderator:
CMAN said:
I was reinforcing the idea that neither is obsurd. I wasn't trying to be against you.

OK... it just seemed like you didn't "get" my comment. ;) (I was illustrating absurdity with absurdity to make the point.)
 
Bouncing Zabaglione Bros. said:
Nvidia certainly tried to, but were unsuccessful with NV30. Faster chips are good - I don't know why you are trying to make out that having a faster-clocked chip is somehow a bad thing. In case you hadn't noticed, the whole semiconductor industry has been making its chips go faster and faster over the last 30 years.

I'm not, nor did I say that.

I also know the semiconductor industry has been trying to go faster. However, there are always more ways to skin a cat (note to PETA: no cats were harmed by this post). Faster does not always mean more efficient, let alone better. Look at IBM's PowerPC, the Motorola 68xxx series vs. the 80x86, and the Sun UltraSPARC vs. the PA-RISC.
 
Mariner said:
Basically, what Patrick is saying is that when ATI & NV designed the R520 and G70 respectively, they would have decided the clock speeds they were targeting as part of the design process.

We've no information to indicate that ATI or NV were aiming at either higher or lower clocks than their chips actually achieve so it would be best to assume (IMO) that they both reached their target ranges.

It seems obvious from G70's 24-pipe design that they were not targeting clock speeds as high as ATI's 16-pipe design, so we've no real indication that ATI has had to force their clock speeds higher than intended, as seems to be your opinion. If both chips had been released at the same time then perhaps G70 might have been clocked a little higher, but then who knows what may or may not have occurred?

I think we are more or less saying the same thing. I know for sure that I do not know what either company had for goals. I do see a lot of similarities between the two companies, though - in my eyes anyway. Both worked on Microsoft projects, both had SEC issues and lawsuits, and both had trouble at one time or another releasing a new GPU.
 
OpenGL guy said:
If the test is not bandwidth limited, then you won't see any benefit from the extra bandwidth. The extra bandwidth is very useful for things that are bandwidth limited, such as MSAA.

So it's valid because it shows the R520 in a bad light? Some bias showing here?

How would you feel about a test comparing the G70 and R520 at stock clocks on the core, but the same memory throughput?

Nite_Hawk
 
IMO the only "gross overkill" on the XT is in power consumption, and we don't know how much of that is due to the extra, faster RAM. You wouldn't be complaining if the 7800 packed twice the RAM and 10GB/s more bandwidth. Edit: well, core temp is probably similarly overkill, if R520s are running much hotter than G70s. No doubt the XT is, anyway; the XL, given its power draw, may not be.

The extra draw at load will hopefully be corrected with dynamic clocks.

Actually, can we infer how much extra power the RAM requires from the difference in desktop power draw between the XT and XL? For instance, the XL system draws 161/225 W idle/load, and the XT, 192/274 W. Xbit shows the XT itself draws 110 W loaded, while the XL draws just 60 W. How much of the 30 W idle difference can be attributed to the GPU, and how much to the RAM, considering their stock voltages (XL: 1.08/1.88 V GPU/RAM; XT: 1.31/2.07 V GPU/RAM)?
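
One way to get a rough handle on that question is the usual dynamic-power rule of thumb, P scaling with f x V^2. A minimal sketch, assuming the commonly quoted XL/XT clocks (500/500 MHz and 625/750 MHz core/memory) and an arbitrary guess at how the XL's 60 W load figure splits between GPU and RAM:

Code:
# Scale the XL's draw up to XT clocks and voltages using the
# dynamic-power rule of thumb P ~ f * V^2. The clocks below
# (XL: 500/500 MHz core/mem, XT: 625/750 MHz) and the 40/20 W
# GPU/RAM split of the XL's 60 W load figure are assumptions.

def power_scale(f_old, v_old, f_new, v_new):
    """Relative dynamic-power scaling from (f_old, v_old) to (f_new, v_new)."""
    return (f_new / f_old) * (v_new / v_old) ** 2

gpu_scale = power_scale(500, 1.08, 625, 1.31)   # ~1.84x
ram_scale = power_scale(500, 1.88, 750, 2.07)   # ~1.82x
print(f"GPU scales by ~{gpu_scale:.2f}x, RAM by ~{ram_scale:.2f}x")

gpu_w, ram_w = 40, 20   # guessed split of the XL's 60 W load draw
print(f"Estimated XT load draw: {gpu_w * gpu_scale + ram_w * ram_scale:.0f} W")

With that made-up split the scaled total lands right around Xbit's 110 W figure for the XT, but since the two scaling factors come out nearly identical (~1.8x each), almost any split fits about as well - which is exactly why the wall-socket numbers alone can't cleanly separate GPU from RAM.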
 
Last edited by a moderator:
Pete said:
Actually, can we infer how much extra power the RAM requires from the difference in desktop power draw between the XT and XL? For instance, the XL system draws 161/225 W idle/load, and the XT, 192/274 W. Xbit shows the XT itself draws 110 W loaded, while the XL draws just 60 W. How much of the 30 W idle difference can be attributed to the GPU, and how much to the RAM, considering their stock voltages (XL: 1.08/1.88 V GPU/RAM; XT: 1.31/2.07 V GPU/RAM)?
We should be careful here. We need to make sure the XL and XT are the same revision so we are not comparing an XL chip with the "soft ground" problem to an XT without.
 
Nite_Hawk said:
How would you feel about a test comparing the G70 and R520 at stock clocks on the core, but the same memory throughput?
There are some problems here too. For example, faster memory tends to have higher penalties for certain operations, but burst speed is good so it's an overall win (you hope). By downclocking one card's memory, you might still be paying the price for the penalties when the penalties should be reduced. Similarly, if you overclock a card's memory, you are now running the RAM out of spec and are giving the card an advantage since you aren't paying the higher price for the penalties.

I think it's best to start with cards that are already clocked near where you want to test.
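
To put a number on that point: timings are programmed in clock cycles, so memory that keeps its rated timings pays a larger absolute penalty when it is downclocked. A small sketch using the Samsung GDDR3 CAS figures brought up later in the thread (CAS 9 at 600 MHz, CAS 11 at 750 MHz); whether a given card actually reprograms its timings when you move the memory clock is an assumption here, not something we know:

Code:
def cas_latency_ns(cas_cycles, clock_mhz):
    """Absolute CAS penalty in nanoseconds at a given memory clock."""
    return cas_cycles / clock_mhz * 1e3

print(cas_latency_ns(11, 750))   # ~14.7 ns: 750 MHz parts at their rated clock and timings
print(cas_latency_ns(11, 600))   # ~18.3 ns: the same parts downclocked, CAS 11 left alone
print(cas_latency_ns(9, 600))    # ~15.0 ns: memory actually binned and timed for 600 MHz

Overclocking works the other way: CAS-9 parts pushed to 750 MHz would see roughly a 12 ns penalty - out of spec, but effectively "free" latency, which is the advantage being described above.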
 
OpenGL guy said:
There are some problems here too. For example, faster memory tends to have higher penalties for certain operations, but burst speed is good so it's an overall win (you hope). By downclocking one card's memory, you might still be paying the price for the penalties when the penalties should be reduced. Similarly, if you overclock a card's memory, you are now running the RAM out of spec and are giving the card an advantage since you aren't paying the higher price for the penalties.

I think it's best to start with cards that are already clocked near where you want to test.

But the purpose of this whole exercise should not be to determine which is "better", but rather just to investigate aspects of the different architectures.
 
I for one think that this was a fun exercise in testing, but it probably should have been worded a bit differently. As it is, I agree that many will come away thinking "ATI sucks" when it is not the case. The whole exercise though brings up new questions, and I think that is a good thing.

The first thing that should come up in anyone's mind is how much of a clockspeed gain NVIDIA will get from 90 nm. Well, as you all know, 110 nm does not utilize low-K, and that has a significant effect on overall clockspeeds (10% to 15% over FSG). So not only will NV be going down to 90 nm, they will also be producing their first low-K parts (OK, they are already producing the 6100/6150... but let's not muddy the waters too much). So if minimal changes were made to G70 and they recompiled the design using the 90 nm data libraries, what kind of clock improvement would we see? I would guess that 550 MHz would be pretty standard for such a design, and the faster speed-bin products would hit around 600 MHz. I would of course be very curious as to how much the power draw and heat production of such a part would increase over a lower-clocked 110 nm product. I think it would be VERY interesting to see how the differences in design between NV and ATI affect power and heat. If NVIDIA can produce a 24-pipe design running at 600 MHz in the faster bin, all the while creating as much heat as a 7800 GTX and eating as much power, then ATI is going to have a hard sell to OEMs on their hands.
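
For what it's worth, here's a back-of-the-envelope on those numbers; the 430 MHz base is the shipping 7800 GTX core clock, the 10-15% low-K gain is from the paragraph above, and the extra shrink headroom is purely a guess:

Code:
g70_base = 430                # MHz, shipping 7800 GTX core clock
low_k_gain = (1.10, 1.15)     # 10-15% over FSG, per the estimate above
shrink_gain = 1.15            # assumed extra headroom from the 110 nm -> 90 nm move

for lk in low_k_gain:
    print(f"{g70_base * lk * shrink_gain:.0f} MHz")   # ~544 and ~569 MHz

That lands in the same ballpark as the 550 MHz "standard" bin; hitting 600 MHz would need the shrink to contribute closer to 20% on top of low-K, which may or may not be realistic.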

So, that was basically one line of thought that popped up after the article. There are others. But I don't think that we can stress enough that each design is unique, and each one accepts different tradeoffs to achieve their results. I think ATI's product has a stronger feature set than NV, but NV has a more proven track record with the 7800 GTX due to its availability and power features.

Still, good work on the article, it was very enjoyable.
 
Sxotty said:
But the purpose of this whole exercise should not be to determine which is "better", but rather just to investigate aspects of the different architectures.

And how are you getting that information if your results are tainted by issues such as memory latency?
 
OpenGL guy said:
There are some problems here too. For example, faster memory tends to have higher penalties for certain operations, but burst speed is good so it's an overall win (you hope). By downclocking one card's memory, you might still be paying the price for the penalties when the penalties should be reduced. Similarly, if you overclock a card's memory, you are now running the RAM out of spec and are giving the card an advantage since you aren't paying the higher price for the penalties.

I think it's best to start with cards that are already clocked near where you want to test.

When you say penalties, do you mean different CAS/RAS/etc latencies or something more subtle? It would be useful if ATI would be willing to publish information like this so that we could make more informed comparisons... :cool:

Nite_Hawk
 
Last edited by a moderator:
Nite_Hawk said:
When you say penalties, do you mean different CAS/RAS/etc latencies or something more subtle? It would be useful if ATI would be willing to publish information like this so that we could make more informed comparisons... :cool:

Nite_Hawk

Well, if he's just talking about RAM latencies, Samsung publishes those numbers. Here
 
Last edited by a moderator:
OpenGL guy said:
Exactly what I am referring to.

So why didn't you just come out and say that the G70 would be running tighter timings than the XT with both clocked to 1200 MHz? All this fancy-schmancy talk is sometimes just unnecessary! :rolleyes: :LOL:
 
AlphaWolf said:
Well, if he's just talking about RAM latencies, Samsung publishes those numbers. Here

Yeah, I found that link right before you edited your post. :) It looks like the recommended setting is CAS 9 for 600 MHz GDDR3 and CAS 11 for 750 MHz memory. Having said that, how important is memory latency when dealing with video cards? It seems that a lot of the data (textures, etc.) is quite large and hopefully somewhat sequential?

Well, at least the 7800 GT and the X1800 XL seem to have similar memory configurations...

Nite_Hawk
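
On the question of how much memory latency matters, here is a crude illustration of why large, mostly sequential fetches hide it: the first access pays the latency, and everything after streams at full bandwidth. The bus width, clock, and fetch sizes below are illustrative assumptions, not the measured behaviour of either card:

Code:
def fetch_time_ns(num_bytes, latency_ns, bandwidth_gbps):
    """Time to pull one contiguous block: initial latency plus streaming time."""
    return latency_ns + num_bytes / bandwidth_gbps   # 1 GB/s == 1 byte/ns

latency = 15.0      # ns, roughly CAS 9 at 600 MHz
bandwidth = 38.4    # GB/s, e.g. a 256-bit bus at 1200 MT/s (assumed)

for size in (64, 512, 4096):   # bytes per contiguous fetch
    total = fetch_time_ns(size, latency, bandwidth)
    print(f"{size:5d} B: {total:6.1f} ns total, {latency / total:5.1%} spent on latency")

For a lone 64-byte fetch the latency dominates, but GPUs keep many such requests in flight, and for long texture streams the cost is mostly bandwidth - which fits the intuition above about large, sequential data.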
 