G70 Vs X1800 Efficiency

russo121 said:
To me, this test is very interesting because it can tell you if you should buy R520 or wait for G70 90nm - essentially, if R580 and G70 have the same number of pipes and clocks (please don't tell me Nvidia will not achieve R5xx clocks, because we don't know a thing - I don't know).

This is exactly why these tests (more specifically, the article's "conclusions" based on the tests) are dangerous. The tests are NOT interesting for that reason. You should NOT wait for G70 90nm...because

a) We have no idea what G70 90nm is, when it's coming, or what nVidia's design goals are for that part.
b) We have no idea what ATI will have on the market to counter G70 90nm.

(To be clear...I'm not saying you should go out and buy an XL or a GTX now...I'm saying these tests don't say squat about whether you should get an XL now or "wait for G70 90nm".)
 
3dcgi said:
Memory latency is important. Graphics chips are designed to hide memory latency, but it can't always be hidden. Caches are sized to hide this latency, and increasing latency beyond what is expected can increase the penalty. On top of this, the ideal cache size can be a difficult thing to determine until a lot of performance modeling or real-world testing has been done.

Do you think that the difference between CAS 9 and CAS 11 memory clocked at the same speed would be enough to significantly tip the scales toward the video card with the CAS 9 memory? It seems that if the card with CAS 11 memory has been designed to hide that latency as much as possible, it may play a small role, but not a terribly significant one. I seem to remember that back in the old days going from CAS 3 to CAS 2 yielded something like a 10% increase... Going from CAS 11 to CAS 9 on a board that is designed to hide CAS 11 latency seems like it might yield rather smaller benefits.
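As a rough sanity check (the 750MHz command clock below is only an assumed placeholder, not either card's actual spec), the absolute gap between CAS 9 and CAS 11 works out to a couple of nanoseconds:

```python
# Rough estimate of the absolute latency gap between CAS 9 and CAS 11.
# The 750MHz command clock is an assumed placeholder, not a real spec.
mem_clock_hz = 750e6                 # memory command clock (assumed)
cycle_ns = 1e9 / mem_clock_hz        # one memory clock in nanoseconds

for cas in (9, 11):
    print(f"CAS {cas}: {cas * cycle_ns:.2f} ns")

# The two-cycle difference is ~2.7 ns at this clock, a small slice of a full
# DRAM access, so a latency-tolerant GPU pipeline should soak up most of it.
print(f"delta: {2 * cycle_ns:.2f} ns")
```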

Nite_Hawk
 
Mariner said:
As NV will be using exactly the same TSMC process as ATI, why should their chip have lower power draw? I do realise that chip design also affects power draw, but I think we'll have to wait and see how "G7X" fares rather than making assumptions.
I don't think it's a 100% given that NVIDIA will automatically use TSMC for 90nm desktop parts; while that may be the most obvious choice, they may use others. NVIDIA gravitated away from IBM partially because they don't have the intermediate process nodes (i.e. 110nm being a shrink from 130nm), but should their customer lines be on stream with 90nm then they may choose them. Along with UMC, NVIDIA are also dealing with Chartered - we just don't know what will pop out of which fab yet.

Another factor could be utilisation of the fab capacity - 90nm is a volume process node for TSMC, but with ATI already having three or four 90nm product lines coming out of TSMC and Microsoft gobbling up as many Xenos parent chips as they can for probably a fair number of months, could the capacity have some say in where NVIDIA are going?
 
Do the Fabs only get paid on the basis of wafer and process size? In other words, financially, do they give a rat's @ss whether it is RV515 or R520 on that wafer at 90nm?

There would be, of course, some PR egoboo reasons to care for them --sort of "the halo effect" for Fabs.

The reason I ask is that we know (because Orton said so) that one reason ATI are going to UMC for some 90nm is capacity for lower-end parts. I'm just wondering if the Fab is really in a business model where they can say to high-profile customers "so sorry, first in, first out". . .or whether instead they might say to another customer, "Sorry, but we have to reserve some capacity over here for an important customer with a high-end part that is very important to them and us."
 
Nite_Hawk said:
Do you think that the difference between CAS 9 and CAS 11 memory clocked at the same speed would be enough to significantly tip the scales toward the video card with the CAS 9 memory? It seems that if the card with CAS 11 memory has been designed to hide that latency as much as possible, it may play a small role, but not a terribly significant one. I seem to remember that back in the old days going from CAS 3 to CAS 2 yielded something like a 10% increase... Going from CAS 11 to CAS 9 on a board that is designed to hide CAS 11 latency seems like it might yield rather smaller benefits.
That may be fine for normal operation. But you may start getting texture cache misses galore if you downclock the memory relative to the core.
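A quick sketch of why (all figures below are made-up illustrations, not real R520/G70 numbers): the latency the texture pipeline has to cover is counted in core clocks, so lowering the memory clock relative to the core stretches every miss across more core cycles than the latency-hiding hardware was sized for.

```python
# Sketch: how downclocking the memory relative to the core stretches a miss.
# All numbers are illustrative assumptions, not real R520/G70 figures.
MISS_LATENCY_MEM_CYCLES = 60    # assumed DRAM access time in memory clocks
HIDEABLE_CORE_CYCLES = 100      # assumed latency the texture pipe can cover

def miss_latency_in_core_cycles(core_mhz, mem_mhz):
    """Convert a miss measured in memory clocks into core clocks."""
    return MISS_LATENCY_MEM_CYCLES * core_mhz / mem_mhz

for core_mhz, mem_mhz in [(600, 700), (600, 500), (600, 350)]:
    latency = miss_latency_in_core_cycles(core_mhz, mem_mhz)
    verdict = "hidden" if latency <= HIDEABLE_CORE_CYCLES else "stalls"
    print(f"core {core_mhz}MHz / mem {mem_mhz}MHz: {latency:.0f} core cycles -> {verdict}")
```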
 
chavvdarrr said:
afaik, the 1800XL has twice the transistor count of the X800, yet has ~ the same power draw
Why should we expect less from NV?

The XL is a fairly hefty downclock and downvolt from the target design frequency, which allows fairly substantial power savings. This downclock and downvolt is possible because of the aggressive frequency design.

Unless Nvidia is willing to do a lot of circuit/optimization work, and then throw it all away for their performance part, I wouldn't expect that kind of scaling. A better comparison is the X800 XT or X850 XT against the X1800 XT.
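For reference, the usual first-order model is dynamic power roughly proportional to C * V^2 * f, so a downclock and a downvolt compound. A minimal sketch, assuming made-up voltages (the 500/625MHz figures are the XL/XT core clocks):

```python
# First-order dynamic power model: P ~ C * V^2 * f.
# 500/625MHz are the X1800 XL/XT core clocks; the voltages are assumptions.
def relative_power(v, f, v_ref, f_ref):
    """Dynamic power relative to a reference voltage/frequency point."""
    return (v / v_ref) ** 2 * (f / f_ref)

xt_volts, xt_mhz = 1.30, 625   # assumed target-design voltage, XT clock
xl_volts, xl_mhz = 1.10, 500   # assumed lowered voltage, XL clock

print(f"XL dynamic power vs XT: {relative_power(xl_volts, xl_mhz, xt_volts, xt_mhz):.0%}")
# ~57% on these numbers: the 20% frequency drop alone only gets you to 80%,
# but the voltage drop is squared on top of it, which is where most of the
# XL's savings would come from.
```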
 
And yet the GeForce 7800 GTX is both much larger and has lower power consumption than the previous high-end parts, from nVidia or ATI. nVidia did apparently do a lot of optimization work with the G70, as it relates to power consumption.
 
Chalnoth said:
And yet the GeForce 7800 GTX is both much larger and has lower power consumption than the previous high-end parts, from nVidia or ATI. nVidia did apparently do a lot of optimization work with the G70, as it relates to power consumption.

7800GTX uses more power than a 6800Ultra according to xbit.
 
Just a question - can't the R520 be hotter (than G70) because the UTDP utilizes the ALUs much more compared to the G70 architecture?
 
no-X said:
Just a question - can't the R520 be hotter (than G70) because the UTDP utilizes the ALUs much more compared to the G70 architecture?
Clockspeeds are a more likely culprit (both memory and core). The R520 isn't that much more efficient for current games.
 
no-X said:
Just a question - can't the R520 be hotter (than G70) because the UTDP utilizes the ALUs much more compared to the G70 architecture?


Who said it's hotter? People with ATITool are reporting 20°C temp variations between that and CCC's reading; I think there might be a temp bug.

Other than that, the R520 core seems about right considering its clock speeds.
 
SugarCoat said:
Who said it's hotter? People with ATITool are reporting 20°C temp variations between that and CCC's reading; I think there might be a temp bug.
More power consumption = more heat generated. The chip is hotter, in other words.

Granted, some of the heat can be mitigated through better cooling, but that doesn't change how much heat is generated in the core.
 
Chalnoth said:
More power consumption = more heat generated. The chip is hotter, in other words.

Granted, some of the heat can be mitigated through better cooling, but that doesn't change how much heat is generated in the core.

People seem to be forgetting about the RAM when talking about power consumption. The assumption seems to be that the R520 chip itself is drawing a lot more power, except that the X1800 XL draws quite a bit less than the GTX. Is 125MHz on the core really adding 50 watts of usage? I don't think so.
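A back-of-envelope check on that, using the same first-order P ~ V^2 * f model (500/625MHz are the XL/XT core clocks; the XL core wattage and both voltages are pure assumptions for illustration):

```python
# Can 125MHz on the core alone explain ~50W? First-order model: P ~ V^2 * f.
# 500/625MHz are the X1800 XL/XT core clocks; everything else is assumed.
def power_scale(v, f, v_ref, f_ref):
    return (v / v_ref) ** 2 * (f / f_ref)

xl_core_watts = 35.0                           # assumed XL core-only draw
freq_only = power_scale(1.0, 625, 1.0, 500)    # clock bump, no voltage bump
with_vbump = power_scale(1.3, 625, 1.1, 500)   # clock bump plus assumed voltage bump

print(f"frequency only : +{xl_core_watts * (freq_only - 1):.0f} W")
print(f"with volt bump : +{xl_core_watts * (with_vbump - 1):.0f} W")
# Roughly +9W from frequency alone and ~+26W even with a voltage bump on these
# assumptions, so the XT's faster memory plausibly covers a good chunk of the rest.
```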
 
AlphaWolf said:
People seem to be forgetting about the RAM when talking about power consumption. The assumption seems to be that the R520 chip itself is drawing a lot more power, except that the X1800 XL draws quite a bit less than the GTX. Is 125MHz on the core really adding 50 watts of usage? I don't think so.

Well, I think we'll get a pretty good answer to that one in a couple more weeks!
 
AlphaWolf said:
People seem to be forgetting about the RAM when talking about power consumption. The assumption seems to be that the R520 chip itself is drawing a lot more power, except that the X1800 XL draws quite a bit less than the GTX. Is 125MHz on the core really adding 50 watts of usage? I don't think so.
The memory is a possibility, but don't count out the core causing a large portion of that, either. The higher the clockspeeds are, the more quickly power consumption ramps up.
 
AlphaWolf said:
People seem to be forgetting about the RAM when talking about power consumption. The assumption seems to be that the R520 chip itself is drawing a lot more power, except that the X1800 XL draws quite a bit less than the GTX. Is 125MHz on the core really adding 50 watts of usage? I don't think so.

It looks like the XL still draws more than the 7800GT though...

Nite_Hawk
 
Continuing from aaron's comments, do we know that the 7800GTX is pushing the process's clock limits as hard as the X1800XT does?
 