Which DX11 video card offers the best price/performance/power efficiency?

  • AMD Radeon 6970: 1 vote (1.1%)
  • AMD Radeon 6950: 34 votes (37.4%)
  • AMD Radeon 6870: 9 votes (9.9%)
  • AMD Radeon 6850: 23 votes (25.3%)
  • ATI Radeon 5970: 0 votes (0.0%)
  • ATI Radeon 5870: 2 votes (2.2%)
  • ATI Radeon 5850: 1 vote (1.1%)
  • ATI Radeon 5830: 1 vote (1.1%)
  • ATI Radeon 5770: 2 votes (2.2%)
  • ATI Radeon 5750: 0 votes (0.0%)
  • Nvidia GeForce GTX 580: 1 vote (1.1%)
  • Nvidia GeForce GTX 570: 8 votes (8.8%)
  • Nvidia GeForce GTX 480: 0 votes (0.0%)
  • Nvidia GeForce GTX 470: 0 votes (0.0%)
  • Nvidia GeForce GTX 460 1GB: 8 votes (8.8%)
  • Nvidia GeForce GTX 460 768MB: 0 votes (0.0%)
  • Nvidia GeForce GTS 450: 0 votes (0.0%)
  • Other: 1 vote (1.1%)

  • Total voters: 91
*Checks that FS review*

Jesus Christ, that is Tom-esque levels of fail. They don't list their full system, for one thing.

Also, much of the power difference can be explained by the fact that they're using Intel's top of the line CPU, which has six cores. The CPU is also massively overclocked.
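To make that concrete, here's a back-of-the-envelope sketch (every wattage and the PSU efficiency are invented for illustration; none of these figures come from the FS review) of how a heavily overclocked six-core CPU and PSU losses end up inside a wall-socket power reading:

```python
# Every figure here is invented for illustration; nothing is taken from the FS review.
cpu_load_w = 200.0        # heavily overclocked six-core CPU under load (assumed)
gpu_load_w = 250.0        # the graphics card itself (assumed)
rest_of_system_w = 60.0   # board, RAM, drives, fans (assumed)
psu_efficiency = 0.85     # typical PSU efficiency under load (assumed)

dc_total_w = cpu_load_w + gpu_load_w + rest_of_system_w
wall_reading_w = dc_total_w / psu_efficiency

print(f"DC load: {dc_total_w:.0f} W, measured at the wall: {wall_reading_w:.0f} W")
# Every extra watt the CPU pulls shows up at the wall as roughly 1/0.85 = 1.18 W,
# so readings from rigs with different CPUs can't be compared watt for watt.
```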
 
But for some reason it ONLY affects Nvidia with massive power draw; ATI does NOT suffer in this configuration.
 
*checks*

Correct on the load power measurement, but at idle it's much higher than TR no matter which company's card they're using.

http://www.techreport.com/articles.x/20088/12 is the power consumption page of TR's Radeon HD 6970/6950 review. Their test setup can be found on page five.

FS also does not list exactly what they were testing when they got that measurement.

I suspect it's Furmark, given the insanely high numbers; few if any apps other than Furmark can get that high. Since basically nothing else draws that much, Furmark power consumption numbers can't be relied upon for either AMD or NVIDIA. Find a few reviews that test games or commonly used benchmarks, and that will give you the best idea of power consumption.

Edit: To best illustrate how the massively OCed six-core CPU affects things, compare the no-card idle result in FS's review to the GTX 580 SLI result in the link I gave above. It is quite enlightening.
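As a rough sketch of that comparison (the wattages below are made up, not taken from either review), the method is simply to put the two platforms' idle readings side by side:

```python
# Hypothetical wattages -- invented purely to illustrate the comparison,
# not taken from the FS or TR articles.
fs_platform_idle_no_card_w = 180.0   # OC'd six-core test rig, no graphics card (assumed)
tr_idle_gtx580_sli_w = 150.0         # stock rig idling with two GTX 580s (assumed)

if fs_platform_idle_no_card_w > tr_idle_gtx580_sli_w:
    print("The test platform alone out-draws the other rig with two cards installed,")
    print("so the platform, not the GPU, is dominating the idle numbers.")
```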
 
Indeed, so the 6950 is really power efficient, given that +1GB = 30W.
[power consumption chart; hotlinking disabled, see the hardware.fr link further down]

Clearly ATI have capped Furmark so it should not be relied upon.

30W? Where do you get that figure?
 
People are looking at the power difference between the HD 5870 1GB and 2GB. The 2GB card uses 16 1Gbit chips instead of 8, unlike the HD 69xx cards, which use 8 2Gbit chips, so that's really comparing apples and oranges.
Too bad Hynix hasn't released the datasheet for the 2Gbit parts, so we could finally put the rumor about 2GB using (much) more power to rest. For all I know, it could use less power if it's made on a newer process...
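A quick back-of-the-envelope comparison of the two memory configurations (the per-chip wattages are pure placeholders, precisely because the 2Gbit datasheet isn't public):

```python
# Placeholder per-chip figures: Hynix hasn't published the 2Gbit datasheet, so
# these are assumptions purely to show why the comparison breaks down.
power_per_1gbit_chip_w = 2.0   # assumed
power_per_2gbit_chip_w = 2.5   # assumed; could even be lower on a newer process

hd5870_1gb_mem_w = 8 * power_per_1gbit_chip_w    # 8 x 1Gbit chips
hd5870_2gb_mem_w = 16 * power_per_1gbit_chip_w   # 16 x 1Gbit chips: double the chips
hd6950_2gb_mem_w = 8 * power_per_2gbit_chip_w    # 8 x 2Gbit chips: same chip count

delta_5870 = hd5870_2gb_mem_w - hd5870_1gb_mem_w
print(f"HD 5870 1GB -> 2GB adds {delta_5870:.0f} W by doubling the chip count")
print(f"HD 6950 2GB memory draws {hd6950_2gb_mem_w:.0f} W with no extra chips at all")
```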
 
Indeed, so the 6950 is really power efficient, given that +1GB = 30W.
Edit: hotlinking disabled, click to see.
http://www.hardware.fr/articles/813-4/dossier-amd-radeon-hd-6970-6950-seules-crossfire-x.html
Clearly ATI have capped Furmark so it should not be relied upon.

Nvidia have been capping Furmark since at least the GTX 2xx series, and ATI since the 4xxx series (via app detection).

With the GTX 580, Nvidia's hardware power monitoring and containment wasn't sufficient, so they added app detection to cope with runaway power.

ATI, with the 5xxx series, put in hardware monitoring and containment similar to what Nvidia had with the GTX 2xx, then further evolved it in the 6xxx series with more elegant power-containment algorithms and monitoring, in addition to allowing end users to adjust the aggressiveness of the containment.
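For illustration only, here's a generic sketch of what such a monitoring-and-containment loop conceptually does. This is not AMD's or Nvidia's actual algorithm; the power budget, poll interval, and helper functions are all made up:

```python
import random
import time

POWER_LIMIT_W = 250.0   # assumed board power budget
AGGRESSIVENESS = 1.0    # stand-in for the user-adjustable containment setting mentioned above

def read_board_power():
    """Hypothetical sensor read; here we just simulate a noisy, Furmark-like load."""
    return random.uniform(200.0, 320.0)

def set_clock_multiplier(m):
    """Hypothetical clock control; a real driver/ASIC would program the GPU here."""
    print(f"clock multiplier -> {m:.2f}")

def containment_step(multiplier):
    power = read_board_power()
    if power > POWER_LIMIT_W * AGGRESSIVENESS:
        # Runaway load: step clocks down until the card is back inside its budget.
        multiplier = max(0.5, multiplier - 0.05)
    elif multiplier < 1.0:
        # Back under budget: step clocks back toward full speed.
        multiplier = min(1.0, multiplier + 0.05)
    set_clock_multiplier(multiplier)
    return multiplier

m = 1.0
for _ in range(10):      # a few iterations of the polling loop
    m = containment_step(m)
    time.sleep(0.01)     # poll interval is a placeholder
```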

Furmark should never, ever be used as a benchmark of power usage. Its ONLY benefit is showing how well the various companies' power monitoring and containment solutions deal with runaway power situations. But even that benefit is completely gone if you have to rely on app detection (GTX 580) to prevent runaway power use.

Regards,
SB
 
I do appreciate the votes; they helped me make the right decision on upgrading my video card. I bought a Radeon 6950 :)

The only small problem I have: running the Radeon 6950 with an Intel Quad Q9650 OC'd @ 3.6GHz is still too slow, plus the slot is PCI-Express x16 version 1.1 and NOT 2.0, since I still have an Asus P5K-Deluxe motherboard.
 
^Too slow for what? I don't think PCI-E v1.1 makes much difference in frame rates. Did you try unlocking your 6950 to a 6970? It may give you enough of a performance gain to be happy (provided your chip can be successfully unlocked).
 
Strange thread.
The card that has the best price/performance/power efficiency is surely a fact, not a choice?

Well, they don't necessarily line up in the same card, though, and then you would need to weight the price, performance, and power efficiency against each other. A cheap one could have cruddy power characteristics, or one with superior power characteristics might end up a dog-slow performer.
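Purely as a sketch of that weighting idea (the cards, prices, frame rates, and wattages below are invented), you could score each candidate on performance per dollar and performance per watt and watch how the "winner" moves with the weights:

```python
# Invented sample data and weights -- just to show that "best" shifts once you
# weight price, performance, and power instead of treating it as a single fact.
cards = {
    "cheap card":     {"price": 150, "fps": 40, "load_w": 200},
    "efficient card": {"price": 260, "fps": 55, "load_w": 140},
    "fast card":      {"price": 330, "fps": 75, "load_w": 260},
}

def score(card, w_perf_per_dollar=1.0, w_perf_per_watt=1.0):
    perf_per_dollar = card["fps"] / card["price"]
    perf_per_watt = card["fps"] / card["load_w"]
    return w_perf_per_dollar * perf_per_dollar + w_perf_per_watt * perf_per_watt

for name, card in cards.items():
    print(name, round(score(card), 3))
# Change the weights and the winner changes, which is why the poll is a matter
# of preference rather than one objectively correct answer.
```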
 
My current card is an HIS Radeon 4850 (IceQ 4) with a small overclock to 700MHz on the GPU and the default 1100MHz GDDR3 memory.

I had no need to upgrade since the 4850 does mostly everything for me, but now I want DX11 tessellation.

The HD 4850 has Tessellation. :p
 
Here is a custom-settings benchmark test that is CPU bound; the minimum frame rate is too low here.
http://img132.imageshack.us/img132/7890/systemrz.png

Does anyone know why the GPU clock is 500MHz at idle?

(I use a single 24-inch DVI monitor)

Huh, within a minute of reading your post I stumbled on this.

Checked thru the settings and it MIGHT be the following selection affecting these stuck clocks. Go to CCC -> Desktops & Displays, click triangle and select Configure. The first tab Attributes has the selection of scaling and DVI settings. I checked the "Reduce DVI frequency on high-resolution displays" and BAM..clock went up to 500/1375. De-selected and rebooted = back to normal..Can this be?

http://www.rage3d.com/board/showthread.php?t=33972379
 