With "at the wall" measuring methods you aren't going to get an absolute value for cards, obviously. But you'll get a more accurate view of how much more power a GPU causes your system to consume versus another GPU, IMO.
The scientific method would be to measure them and determine confidence intervals ... not worry about the mere fact that they are there without numbers. Intuitively I'd say it's not a big deal ... hell, anything which puts a significant load on the system is almost certainly going to decrease benchmark results and couple noise into the GPU power measurement as well.

Heh, I can't believe you guys are actually trying to argue against the scientific method. There are so many sources of noise/variance in total system consumption measurement that it's not even funny.
Well, let's completely ignore PSU efficiency itself, which throws off the numbers from the start. The test isn't meant to be a measure of how much a GPU causes your system to consume; it's meant to be a measure of how much the GPU itself consumes. If a faster GPU causes the CPU and system buses to work harder and the system fans to spin faster, etc., and thereby consume more power, are you saying all of that should be included in "GPU power consumption"?
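To put rough numbers on that objection, here is a minimal sketch in Python of how an at-the-wall delta folds in PSU efficiency and secondary loads on top of the GPU's own draw. Every figure here is an invented assumption chosen only to illustrate the point, not a measurement of any real card:

# All values are hypothetical, chosen only to illustrate the argument.
psu_efficiency = 0.85        # assumed PSU efficiency at this load point
gpu_dc_delta_w = 40.0        # extra DC power drawn by the GPU itself
cpu_dc_delta_w = 8.0         # extra CPU/chipset power from feeding a faster GPU
fan_dc_delta_w = 2.0         # extra fan power from higher case temperatures

dc_delta_w = gpu_dc_delta_w + cpu_dc_delta_w + fan_dc_delta_w
wall_delta_w = dc_delta_w / psu_efficiency

print(f"DC-side delta:     {dc_delta_w:.1f} W")    # 50.0 W
print(f"At-the-wall delta: {wall_delta_w:.1f} W")  # ~58.8 W, vs. 40 W for the GPU alone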
I disagree, but regardless ... you are begging the question.

That's my point. Confidence intervals, or any sort of statistical conclusion about the dependent variable, are a bit less than useless when your samples are smothered in noise.
That's with a big ole dual-slot cooler, though. I would be very surprised if Nvidia even reacts at all. Weren't GT 240 clocks set to come in just under the 75W limit? Unless they can get voltages down as the process matures, I don't see how a clock bump would be imminent. Assuming they care in the first place.
Arguably Xbit does the most accurate consumption testing of all the review sites and they came to a very different result.
http://www.xbitlabs.com/articles/video/display/gf-gt240-1gb_4.html#sect0
NVIDIA’s GeForce GT 240: The Card That Doesn't Matter
For the price of the GT 240 it performs too slowly, and for the performance of the GT 240 it costs too much. We cannot under any circumstances recommend buying a GT 240, there are simply better cards out there for the price.
1. At the wall. If GPU X in the system draws more power than GPU Y, then it has effectively increased your power usage by that difference.
You haven't actually shown they are smothered in noise (or that the Xbit measurements aren't for that matter).
It does, but it doesn't give me a good idea of the variability of the measurement method in and of itself ... for that you need to run the same system and card with the same benchmark multiple times.

Granted, they all may have used different applications to load the GPU, but it should give you a good idea of how useful such experiments are.
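For what it's worth, a minimal sketch of quantifying that variability, assuming you've logged the at-the-wall reading from several repeat runs of the same benchmark on the same system and card; the readings below are placeholders, not real data:

import math
import statistics

# Hypothetical at-the-wall readings (W) from repeat runs of one benchmark
# on one unchanged system + card combination.
wall_watts = [212.4, 214.1, 211.8, 213.6, 212.9, 213.2]

mean = statistics.mean(wall_watts)
sd = statistics.stdev(wall_watts)        # sample standard deviation
sem = sd / math.sqrt(len(wall_watts))    # standard error of the mean
ci95 = 1.96 * sem                        # rough 95% interval (normal approximation;
                                         # a t-distribution is stricter for few runs)

print(f"{mean:.1f} W +/- {ci95:.1f} W (95% CI, n={len(wall_watts)})")

If that interval comes out small next to the delta you see between two cards, the noise isn't smothering anything; if it comes out large, the objection above stands.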
I don't think it's that bad; in most benchmarks it's so close I'd call it a draw. There are, however, a few (3) benchmarks where the 9600GT is noticeably faster (and the opposite basically is never the case). Maybe, as a rule of thumb, it will fare relatively better in newer games if they rely more on shader power and less on texturing/ROPs?

Seems to me that Xbit screwed up on that one; the GT240 is SLOWER in many ... no, not many ... most areas than the part it is replacing (http://www.xbitlabs.com/articles/video/display/gf-gt240-1gb_13.html#sect1 ; 13 separate games/benches, 3 different resolutions) and only really outpaces its predecessor in HAWX, where DX10.1 plays a part.
Obviously, the GT240 is built cheap (they even saved on SLI support!), with no additional power connector (and sold at a premium, but that's a different story). Cheap doesn't really translate to good cooling solutions, unfortunately; maybe it's at least not that noisy instead...

The one area where the 240 is clearly superior is in power consumption. Of course, this does not translate into a cooler-running part: the GT240, according to Xbit, ran hotter than the 9600GT.
But you could say the same about the HD5770 vs. HD4870: it costs more, has more transistors, and is slower. That doesn't mean the chip is bad, just that they are selling it overpriced compared to last gen (because they can).

So, it's for the most part slower, it costs more, it has 44% more transistors for the 40nm retail part (nearly 50% more for the OEM part, which is bigger as well), and it's declared a "worthy successor"??!!
No, it doesn't tell you that at all.
Meh, wall measurements have their uses, but figuring out the power draw of the GPU is not one of them.

If the only thing in the system that has changed is the GPU used, the power consumption increase will be due to the GPU, modified by the efficiency of your PSU. There's no other possible interpretation.
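Put numerically, that claim is just a scaling by PSU efficiency; a quick sketch with assumed wall readings and an assumed 85% efficiency (with the caveat that efficiency itself shifts with load, which is part of the earlier objection):

# Hypothetical wall readings for the same system with two different cards.
wall_with_gpu_x = 265.0   # W at the wall
wall_with_gpu_y = 225.0   # W at the wall
psu_efficiency = 0.85     # assumed efficiency; in reality it varies with load

wall_delta = wall_with_gpu_x - wall_with_gpu_y   # 40 W at the wall
dc_delta = wall_delta * psu_efficiency           # ~34 W on the DC side

print(f"Wall delta: {wall_delta:.0f} W, implied DC-side delta: {dc_delta:.0f} W")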
TDP figures are a maximum number for the SKU variant; actual results will vary a lot below the rated TDP. It's pointless taking the difference between two different variants, even from the same review, because you don't know whether you have a high-leakage sample of one variant and a low-leakage sample of the other.
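As a made-up illustration of why that sample-to-sample spread matters (the typical draws and the leakage spread below are invented numbers, not data for any real card):

# Two hypothetical variants with different typical draws, and two individual
# samples from opposite ends of an assumed +/-15% leakage spread.
typical_draw_a = 55.0             # W, assumed typical draw of variant A
typical_draw_b = 47.0             # W, assumed typical draw of variant B

sample_a = typical_draw_a * 0.85  # a low-leakage A sample: ~46.8 W
sample_b = typical_draw_b * 1.15  # a high-leakage B sample: ~54.1 W

print(f"{sample_a - sample_b:.1f} W")  # ~ -7.3 W: the sample of the "lighter" variant draws more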