AMD: R8xx Speculation

How soon will Nvidia respond with GT300 to the upcoming ATI RV870 lineup of GPUs?

  • Within 1 or 2 weeks: 1 vote (0.6%)
  • Within a month: 5 votes (3.2%)
  • Within a couple of months: 28 votes (18.1%)
  • Very late this year: 52 votes (33.5%)
  • Not until next year: 69 votes (44.5%)

  • Total voters: 155
  • Poll closed.
I don't see a problem with an at-the-wall measurement ... increases in CPU/Northbridge power consumption can be GPU dependent too (let's say the drivers use the CPU for the vertex transforms, for instance).
 
Heh, I can't believe you guys are actually trying to argue against the scientific method. There are so many sources for noise/variance in total system consumption measurement that it's not even funny.
 
As MfA noted, if the GPU is offloading calculations to the CPU, the resulting drop in GPU power consumption comes at the cost of increased CPU consumption, and that wouldn't be reflected in measuring the PCIe rails either.

With "at the wall" measuring methods you aren't going to get an absolute value for cards, obviously. But you'll get a more accurate view of how much more power a GPU causes your system to consume versus another GPU, IMO.

Neither method is perfect for all scenarios. But without measuring power draw at the PCIe slot, I'm not sure just how accurate Xbit's current method is, as the slot could account for anywhere from 0-75 watts. And even then, it wouldn't account for any CPU offloading. A simple example would be video playback with/without GPU-assisted acceleration and/or how much of it is accelerated. Emulation of features, etc...

Regards,
SB
 
With "at the wall" measuring methods you aren't going to get an absolute value for cards, obviously. But you'll get a more accurate view of how much more power a GPU causes your system to consume versus another GPU, IMO.

Well, let's completely ignore PSU efficiency itself, which throws off the numbers from the start. The test isn't a measure of how much a GPU causes your system to consume, it's a measure of how much the GPU consumes. If a faster GPU causes the CPU and system buses to work harder, system fans to spin faster etc. and consume more power, are you saying all that should be included in "GPU power consumption"?
 
Heh, I can't believe you guys are actually trying to argue against the scientific method. There are so many sources for noise/variance in total system consumption measurement that it's not even funny.
The scientific method would be to measure them and determine confidence intervals ... not worry about the mere fact that they are there, without numbers. Intuitively I'd say it's not a big deal ... hell, anything which puts significant load on the system is almost certainly going to decrease benchmark results and couple noise to the GPU power measurement as well.
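For instance, a minimal sketch of what I mean (Python, with made-up wall readings): repeat the same benchmark a few times per card and put a rough interval on the difference.

# Hypothetical at-the-wall readings (watts) from repeated runs of the same
# benchmark on the same system, swapping only the graphics card.
from math import sqrt
from statistics import mean, stdev

card_a = [318, 324, 321, 319, 323]
card_b = [301, 305, 299, 304, 302]

def mean_and_se(samples):
    # mean and standard error of the mean
    return mean(samples), stdev(samples) / sqrt(len(samples))

mean_a, se_a = mean_and_se(card_a)
mean_b, se_b = mean_and_se(card_b)

diff = mean_a - mean_b
half_width = 2 * sqrt(se_a**2 + se_b**2)  # rough 95% interval, normal approximation

print(f"card A draws {diff:.1f} +/- {half_width:.1f} W more at the wall than card B")

If the interval is tight relative to the difference between cards, the noise everyone is worried about clearly isn't swamping the measurement.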
 
That's my point. Confidence intervals or any sort of statistical conclusion about the dependent variable are a bit less than useless when your samples are smothered in noise. Reviewers measure total system consumption and use that to draw conclusions about GPU power consumption and efficiency. The problem there should be obvious (in comparison with Xbit's approach).
 
Well, let's completely ignore PSU efficiency itself, which throws off the numbers from the start. The test isn't a measure of how much a GPU causes your system to consume, it's a measure of how much the GPU consumes. If a faster GPU causes the CPU and system buses to work harder, system fans to spin faster etc. and consume more power, are you saying all that should be included in "GPU power consumption"?

PSU efficiency is only going to magnify usage by a relatively predictable amount and will affect all systems equally, unless one card were to increase power consumption so much that it radically changes the power efficiency.

So, it would be perfectly fair for a GPU vendor to offload as much as it possibly can to the CPU to attain lower GPU power numbers and thus claim their GPU is green, without regard to the overall increase in power consumption? Not to say any vendor is doing this, it's just an example of why it's important to take note of.

All things (system) being equal... Only change is measuring method.

1. At the wall. If GPU X in the system draws more power than GPU Y, then it's effectively increased your power usage by that difference.

2. Measure power rails. If GPU X now draws less power than GPU Y, then what does that mean? You're still drawing exactly the same amount of power as in the case 1 measurement, due entirely to the GPU used.

In case 2, there's no way to know whether the lower power-rail measurement is because work is being offloaded to the CPU, or because they aren't measuring power delivered by the PCIe slot, or a combination of the two.

So which method is more accurate when considering the reason behind trying to determine power consumption in the first place?

If you are trying to find a "green" GPU or get a GPU to lower your power consumption, method 1 is obviously the best.

If all you care about is how much the GPU itself consumes, regardless of how much it increases your overall power usage, then I guess method 2 is questionably OK.
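To make the difference between the two methods concrete, a rough sketch with entirely made-up numbers (treating PCIe slot power and CPU offload as things method 2 never sees):

# Made-up numbers purely to illustrate what each method captures when
# swapping GPU Y for GPU X in an otherwise identical system.
gpu_x_rails_w = 60.0         # what a rail-only meter reports for GPU X
gpu_x_slot_w = 40.0          # power drawn through the PCIe slot (unmeasured, 0-75 W)
gpu_x_cpu_offload_w = 15.0   # extra CPU draw caused by work offloaded to the CPU

gpu_y_rails_w = 90.0
gpu_y_slot_w = 20.0
gpu_y_cpu_offload_w = 0.0

# Method 2: rails only. GPU X looks 30 W "greener" than GPU Y.
rails_delta = gpu_y_rails_w - gpu_x_rails_w

# Method 1: at the wall, the whole system is captured (PSU efficiency ignored
# here since it scales both readings by roughly the same factor).
system_x = gpu_x_rails_w + gpu_x_slot_w + gpu_x_cpu_offload_w
system_y = gpu_y_rails_w + gpu_y_slot_w + gpu_y_cpu_offload_w
wall_delta = system_y - system_x

print(f"rails only:  Y - X = {rails_delta:+.0f} W")   # +30 W, X looks better
print(f"at the wall: Y - X = {wall_delta:+.0f} W")    # -5 W, X actually costs you more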

Regards,
SB
 
That's my point. Confidence intervals or any sort of statistical conclusion about the dependent variable are a bit less than useless when your samples are smothered in noise.
I disagree, but regardless ... you are begging the question.

You haven't actually shown they are smothered in noise (or that the Xbit measurements aren't for that matter).
 
That's with a big ole dual slot cooler though. I would be very surprised if Nvidia even reacts at all. Weren't GT 240 clocks set to come in just under the 75w limit? Unless they can get voltages down as the process matures I don't see how a clock bump would be imminent. Assuming they care in the first place.

GT240 binning for the desktop was quite horrible (slack).

Looking at the mobile bins, they're squeezing the same/better clocks at (perhaps) 40W. Still not too optimistic though, if you have to sell budget cards with mobile requirements.

Most 9600GTs have a PCIe power connector. If they're hellbent on replacing that straightforwardly, then they might as well release a GT2-ohwaitits3now-345. Not that that would be *cost* efficient vs the existing 9800GT Green Edition though.
They won't bother - from an economic point of view. If this were 2008, you bet they would.

More reviews coming tomorrow though.
 
Arguably Xbit does the most accurate consumption testing of all the review sites and they came to a very different result.

http://www.xbitlabs.com/articles/video/display/gf-gt240-1gb_4.html#sect0

Seems to me that Xbit screwed up on that one. The GT240 is SLOWER in many.. no, not many.. most areas than the part it is replacing (http://www.xbitlabs.com/articles/video/display/gf-gt240-1gb_13.html#sect1 , 13 separate games/benches, 3 different resolutions) and only really outpaces its predecessor in HAWX, where DX10.1 plays a part. The one area where the 240 is clearly superior is in power consumption; of course this does not translate into a cooler-running part, as the GT240 according to Xbit ran hotter than the 9600GT. So, it's for the most part slower, it costs more, has 44% more transistors for the 40nm retail part (nearly 50% more for the OEM part, which is bigger as well), and it's declared a "worthy successor"??!!

Honestly, for someone to believe that .. wow. Talk about a disservice to any reader; it brings into question their objectivity. I'm hardly an Anand fan by any means, but IMO their review nailed the GT240:
NVIDIA’s GeForce GT 240: The Card That Doesn't Matter
For the price of the GT 240 it performs too slowly, and for the performance of the GT 240 it costs too much. We cannot under any circumstances recommend buying a GT 240, there are simply better cards out there for the price.

Priced above a 9800GT yet performing on average on par with or below a 9600GT, IMO that is a failure no matter who makes it; otherwise prices need to drop accordingly.
 
1. At the wall. If GPU X in the system draws more power than GPU Y, then it's effectively increased your power usage by that difference.

No, it doesn't tell you that at all.

You haven't actually shown they are smothered in noise (or that the Xbit measurements aren't for that matter).

[Image: consumption.png, load power figures compiled from several reviews]

Those are load wattage numbers pulled from reviews using total system power consumption. Really useful as an indicator of how much more power a 5870 pulls over a 5850 eh? Granted, they all may have used different applications to load the GPU but it should give you a good idea of how useful such experiments are.
 
Granted, they all may have used different applications to load the GPU but it should give you a good idea of how useful such experiments are.
It does, but it doesn't give me a good idea of the variability of the measurement method in and of itself ... for that you need to run the same system & card with the same benchmark multiple times.
 
Correct, but with any experiment variability will increase with the number of variables, especially if those variables are highly correlated. By isolating their measurements to the GPU only Xbit is inherently reducing noise by eliminating external factors. That's the whole point of their approach.
 
Seems to me that Xbit screwed up on that one. The GT240 is SLOWER in many.. no, not many.. most areas than the part it is replacing (http://www.xbitlabs.com/articles/video/display/gf-gt240-1gb_13.html#sect1 , 13 separate games/benches, 3 different resolutions) and only really outpaces its predecessor in HAWX, where DX10.1 plays a part.
I don't think it's that bad; in most benchmarks it's so close I'd call it a draw. There are, however, a few (3) benchmarks where the 9600GT is noticeably faster (and the opposite is basically never the case). Maybe as a rule of thumb it will fare relatively better in newer games, if they rely more on shader power and less on texturing / ROPs?
The one area where the 240 is clearly superior is in power consumption; of course this does not translate into a cooler-running part, as the GT240 according to Xbit ran hotter than the 9600GT.
Obviously, the GT240 is built for cheap (they even saved on SLI support!), with no additional power connector (and sold at a premium, but that's a different story). Cheap doesn't really translate to good cooling solutions unfortunately, and maybe at least it's not that noisy...
So, it's for the most part slower, it costs more, has 44% more transistors for the 40nm retail part (nearly 50% more for the OEM part, which is bigger as well), and it's declared a "worthy successor"??!!
But you could say the same about the HD5770 vs. HD4870. It costs more, has more transistors, and is slower. That doesn't mean the chip is bad, just that they are selling it overpriced compared to last gen (because they can).

Don't get me wrong, I'm not really defending the GT240. I think it's not really that bad a card, it's just overpriced, and I can't see any reason why nvidia couldn't sell it at the same price as the 9600GT. After all, it should be cheaper to produce.
That said, the HD5670 means trouble for the GT240... At the same price, it beats the GT240 consistently. More features, lower power draw (maybe - I want to see some real measurements like xbitlabs or ht4u are doing), and on top of that the die size is considerably smaller, so it should be cheaper to produce; nvidia probably doesn't want to enter into a price war to stay competitive.
 
No, it doesn't tell you that at all.

I'm completely flabbergasted as to how you could come to that conclusion.

If the only thing in the system that has changed is the GPU used, the power consumption increase will be due to the GPU modified by the efficiency of your PSU. There's no other possible interpretation.

As for the rest, it's quite obvious the application used to load the system will affect the power use differences, so comparing different sites' numbers to each other is a bit of a red herring. Likewise, comparing 110-120V to 220V will affect the absolute differences. But it still wouldn't change the fact that any difference is due purely to the change in GPU.
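And to put a rough number on the PSU part (the 85% efficiency figure is made up, and real efficiency also varies with load):

# Converting a measured at-the-wall difference into the DC-side difference
# the components actually drew. Hypothetical numbers.
psu_efficiency = 0.85   # assumed flat efficiency at this load
wall_delta_w = 30.0     # extra watts measured at the wall

dc_delta_w = wall_delta_w * psu_efficiency
print(f"{wall_delta_w:.0f} W at the wall ~= {dc_delta_w:.1f} W of extra DC-side draw")
# The remaining ~4.5 W is PSU conversion loss.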

Regards,
SB
 
If the only thing in the system that has changed is the GPU used, the power consumption increase will be due to the GPU modified by the efficiency of your PSU. There's no other possible interpretation.
Meh, wall measurements have their uses but figuring out power draw of the gpu is not one of them.
How about these scenarios:
- you have a slow cpu, which is already working at its maximum and still isn't really fast enough to feed some gpu. Now you use a faster gpu; the cpu still can't work any faster, and consequently there shouldn't be much of a performance difference, and the power difference shouldn't be that big either (the gpu will not run to its maximum capability, which saves some power, and the cpu should use the same power).
- you've got a fast cpu with power management enabled, which idles half the time feeding some gpu, sometimes entering low power modes. Now with a faster gpu, it'll have to work harder and might not enter its lower power modes anymore. There should be a big difference in performance, as well as in power draw, since not only is the gpu using more power, but you have to add the difference in cpu power as well (and this might very well depend on whether the system has power management enabled or, as some reviewers do, disabled - and that's just the obvious part, I guess it could also depend on drivers).
 
TDP figures are a max number for the SKU variant; actual results will vary a lot (below) the rated TDP. It's pointless taking the difference between two different variants, even from the same review, because you don't know if you have a high-leakage sample of one variant and a low-leakage sample of the other.
 
If the only thing in the system that has changed is the GPU used, the power consumption increase will be due to the GPU modified by the efficiency of your PSU. There's no other possible interpretation.

That hypothesis is only valid if changing the GPU has no effect on the rest of the system. We know for a fact that is not true. Bottlenecks shift and there are higher/lower loads placed on other system components that also consume power.
 
TDP figures are a max number for the SKU variant; actual results will vary a lot (below) the rated TDP. It's pointless taking the difference between two different variants, even from the same review, because you don't know if you have a high-leakage sample of one variant and a low-leakage sample of the other.

Ah yes, certainly. I'm wondering actually how much of a difference sample variance makes. There are other factors too, I'll just name a few:
- Temperature: depends on case airflow (or an open bench, which changes things too, obviously). The higher the temperature, the more power a chip draws (this effect is actually significant). Though I think with reference coolers it shouldn't be much of a problem, as the cooler tries to keep temperature somewhat constant, but if the cooling is non-reference, it could make a difference.
- Non-reference PCB: different voltage regulation. It might have different efficiency, and will likely end up with slightly different voltages.
 
TDP figures are a max number for the SKU variant; actual results will vary a lot (below) the rated TDP. It's pointless taking the difference between two different variants, even from the same review, because you don't know if you have a high-leakage sample of one variant and a low-leakage sample of the other.

Maybe you ought to tell the reviewers that? (Just a suggestion).
 