AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/Rumour Thread

And cherry-picking results from different sources and then mixing them into a single conclusion is not accurate at all either. Computerbase shows the uber 290X consuming 100W more than the 780 Ti, while hardware.fr has it at 50W more.

I tend to trust techpowerup for power consumption numbers. They're the only site I know of that measures card power directly. All other sites measure total system consumption.
 
Anandtech's results can be thrown out as they compare against the 290X's quiet mode, btw. Edit: on second thought, I'm unsure whether this only applies to future reviews or also to the conclusion in this one.
It is what Nvidia recommended to reviewers, and Anandtech just blindly eats it up. At least there are reviewers like Kyle [H] who have the guts to go against such tactics.
 
I tend to trust techpowerup for power consumption numbers. They're the only site I know of that measures card power directly. All other sites measure total system consumption.

Hardware.fr only measures the card's power.

That said, system power consumption is not irrelevant. Sure, it sort of obfuscates the exact differences between the cards themselves, but (i) this is what you actually pay for, and (ii) if a certain card has an impact on CPU power consumption because of higher/lower driver load, that's good to know.
 
Why should Anandtech's results be thrown out? Quiet mode is the default mode; it is AMD's choice. Are we trying to make unfair comparisons again?

Did they throw out Titan's compute numbers or are they still using them?

[Attached image: Anandtech compute benchmark chart with Titan's results]

You can't get those Titan numbers without flicking a switch in the drivers, which certainly isn't shipping spec. If it's the act of physically changing the switch that is the issue, they can always just up the fan speed in CCC instead.
 
Please, this has no bearing on power/noise targets. No one would buy a $1000 card to run DP GPGPU tasks and then waste it by not flipping a fucking checkbox; that's insane.

Do you want benchmarks run with vsync on if that's the default, too? Even if a checkbox fixes that.
 
Did they throw out Titan's compute numbers or are they still using them?

You can't get those Titan numbers without flicking a switch in the drivers, which certainly isn't shipping spec. If it's the act of physically changing the switch that is the issue, they can always just up the fan speed in CCC instead.

How is that relevant to the topic at hand in any way? Are you fundamentally incapable of understanding that the tolerance for {A, N} Defence Forces is exhausted and that this bullshit will be stopped one way or the other? We do not care about your holy crusade for the honour of ATI, stop littering tech threads with this junk. If you want to help Wavey, go send a CV to AMD. Note that this holds for all other thinly veiled crusaders (of which there are a few, for both sides).
 
http://anandtech.com/show/7481/the-amd-radeon-r9-290-review/17

On a final note, with the launch of the 290 and AMD’s promotional efforts we can’t help but feel that AMD is trying to play both sides of the performance/noise argument by shipping the card in a high performance configuration, and using its adjustability to simultaneously justify its noise as something that can be mitigated. This is technically correct (ed: the best kind of correct), but it misses the point that most users are going to install a video card and use it as it's configured out of the box.
Based on this, AT has decided that the 290X will be evaluated at 40% fan speed, because that's what "out of the box" is.

"out of the box" for Titan is single precision mode with boost and higher clocks. In order to get the DP numbers you need to flick a switch in the drivers, disabling boost and lowering clocks.

What's the fucking difference between flicking a switch and flicking a switch? What you call "junk" I say shows the clear difference in AT's train of thought when it comes to each company - for the past 8 months Anandtech has been showing DP results for Titan, which is not "out of the box" performance, yet now it's a convenient excuse to run the 290X's numbers at 40% because they can't move a fan speed slider in CCC?

Also, what happened to "having more information is good"? Is it really so hard to show both sets of the 290X's numbers? If anything, a lot of people would be very interested in that - just to see how their favourite games run in "quiet mode". Edit - of course that's all we'll get now, no uber mode numbers.

There is no excuse for this, none whatsoever. Not one other tech site is doing this, just Anandtech.
 
Perhaps now we will be able to discuss things in a more interesting way, with less noise. I will take this opportunity to note that there are two other names on a list of candidates for early-outs, one wearing green underwear and one wearing red - I am sure they can figure themselves out, and that they will improve the SNR.
 
Ok, so have I simply overlooked FCAT results with the 290 series in Crossfire, now that they're not using the CF bridge?

Edit: Nevermind, I found it. Not a comprehensive review, but it looks like most DX11 titles are doing better. Skyrim is b0rked still though...
 
Hardware.fr only measures the card's power.

That said, system power consumption is not irrelevant. Sure, it sort of obfuscates the exact differences between the cards themselves, but (i) this is what you actually pay for, and (ii) if a certain card has an impact on CPU power consumption because of higher/lower driver load, that's good to know.

Yeah hardware.fr does it too. Forgot about them.

System power consumption is indicative of the reviewer's system. It's not going to be very representative of whatever (psu, cpu, ram, hdd) you're running.

It shows the impact of the GPU on the rest of the system but the implications are blurry. Is higher system consumption good because you're less GPU limited? Higher consumption is supposed to be bad :)
 
It shows the impact of the GPU on the rest of the system but the implications are blurry. Is higher system consumption good because you're less GPU limited? Higher consumption is supposed to be bad :)

Certainly, a higher-performing GPU may very well increase system power consumption because it allows the CPU (and other system components) to pull more of their own weight. An i7-4770K has a lot of idle time if it's feeding a GeForce 630 at 2560x1440 with high details, not so much if it's feeding a Titan.
 
Yeah hardware.fr does it too. Forgot about them.

System power consumption is indicative of the reviewer's system. It's not going to be very representative of whatever (psu, cpu, ram, hdd) you're running.

It shows the impact of the GPU on the rest of the system but the implications are blurry. Is higher system consumption good because you're less GPU limited? Higher consumption is supposed to be bad :)

I would just look at performance and power consumption, without worrying about being GPU-limited or not.

There may be an argument, however, that testing CPU load as a function of GPU model is useful to determine how big a CPU you need to purchase to properly feed your graphics card. But I'm not sure there would be much impact there (at a given performance level, that is).

In other words, a 780 Ti will obviously require a bigger CPU than a GTX 640 to avoid CPU bottlenecks, but while picking a 780 Ti or a 290X may have some impact on CPU power draw, I don't think it's going to make you hit CPU bottlenecks on a decent quad core. This is just a guess, though.
 
Anandtech's numbers show the 290X uber mode to be faster than the 780 Ti

[Attached image: spreadsheet compiling Anandtech's 290X uber mode vs. 780 Ti numbers]

Grabbed copies and will mirror them just in case
 
Anandtech's numbers show the 290X uber mode to be faster than the 780 Ti
By a stunning geomean of 1%... (aside: across a very strange data set of non-equally weighted games due to varying numbers of configurations, fairly useless FRAPS windowed minimum frame rates in the same data set, etc.).
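
(For illustration, a minimal Python sketch of how such a geomean of per-benchmark ratios is computed; the FPS pairs below are made up, not the actual review data.)

```python
import math

# Hypothetical per-benchmark results (made-up numbers, NOT Anandtech's data):
# each entry is (290X uber FPS, 780 Ti FPS) for one game/configuration.
pairs = [(62, 60), (45, 47), (88, 85), (71, 73), (38, 38)]

ratios = [amd / nv for amd, nv in pairs]

# Geometric mean of the per-benchmark ratios; ~1.01 reads as "290X roughly 1% faster overall".
geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
print(f"geomean 290X/780Ti: {geomean:.3f}")
```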

There's honestly not much useful that comes out of aggregating the results for cards that are basically the same performance. Just choose based on what games you play. Or choose based on secondary factors (cost, noise, etc). Or if you don't care about any of that, just flip a coin!

You're welcome, internet. Now can we move on? :p

Regarding boost/PowerTune/turbo, while it is definitely a can of worms, it's ultimately unavoidable. As these cards are increasingly power-limited, you enter the space where you can't turn the whole chip on at once. If you design your chip to run at the same clocks in Furmark as in a game, you're going to be leaving a lot of useful performance on the floor.

This is no different than the situation on CPUs for the last couple years, particularly on ultra-mobile (15W, etc. and down). Game developers are going to have to start dealing with this on GPUs too and targeting power-efficient algorithms over filling all idle processing resources to get optimal results. And yeah, moving to Alaska and gaming outside might be required for "maximum" performance :p
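
(Purely to illustrate the power-limit idea, a toy Python sketch follows; every number and the power model itself are made up, and real Boost/PowerTune implementations use on-die telemetry and far more sophisticated control.)

```python
# Toy model of a power-limited boost loop: the clock steps down when a heavy workload
# pushes the estimated board power over budget, and steps back up when there is headroom.
POWER_LIMIT_W = 250.0                # assumed board power budget
CLOCK_MIN, CLOCK_MAX = 700, 1000     # hypothetical clock range in MHz
STEP_MHZ = 13                        # clock adjustment per step

def estimated_power(clock_mhz: float, activity: float) -> float:
    """Crude model: power grows with clock and with how much of the chip the workload lights up."""
    return 80.0 + 0.2 * clock_mhz * activity   # watts, made-up coefficients

def next_clock(clock: float, activity: float) -> float:
    """Step the clock down when over the power budget, up when under it."""
    if estimated_power(clock, activity) > POWER_LIMIT_W:
        return max(CLOCK_MIN, clock - STEP_MHZ)
    return min(CLOCK_MAX, clock + STEP_MHZ)

clock = CLOCK_MAX
# A game-like load (~0.6-0.7 of the chip busy) with a Furmark-like spike (0.95) in the middle.
for step, activity in enumerate([0.6, 0.6, 0.95, 0.95, 0.95, 0.7]):
    clock = next_clock(clock, activity)
    print(f"step {step}: activity {activity:.2f} -> {clock} MHz, ~{estimated_power(clock, activity):.0f} W")
```

Same chip, same power budget: the heavier the workload, the lower the sustainable clock, which is exactly why fixed Furmark-safe clocks would leave gaming performance on the floor.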
 
By a stunning geomean of 1%... (aside: across a very strange data set of non-equally weighted games due to varying numbers of configurations, fairly useless FRAPS windowed minimum frame rates in the same data set, etc.).
I copied down the data in the above spreadsheet into my own spreadsheet and weighted the benchmarks so that these three conditions were satisfied:
  1. All benchmarks in a single game have a total weight of 1
  2. All benchmarks of the same resolution in a single game total to the same weight (in this case 0.5 since the only resolutions are 3840x2160 and 2560x1440)
  3. All benchmarks of the same resolution in a single game have the same weight value
Then the weighted arithmetic means of the ratios are actually the opposite of the unweighted means: AMD/NVIDIA = 0.992, NVIDIA/AMD = 1.020; as well as the weighted geometric mean: AMD/NVIDIA = 0.986, NVIDIA/AMD = 1.014.

And for what it's worth, comparing just the 3840x2160 benchmarks gives weighted arithmetic means of AMD/NVIDIA = 1.034 and NVIDIA/AMD = 0.977, and comparing just the 2560x1440 benchmarks gives weighted arithmetic means of AMD/NVIDIA = 0.951 and NVIDIA/AMD = 1.064 (the weighted geometric means are similar).

(I can give a picture of my spreadsheet upon request.)
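
(To make the weighting scheme above concrete, here is a minimal Python sketch of it; the game names and FPS values are placeholders, not the spreadsheet data.)

```python
import math

# Hypothetical benchmark results (placeholder games and FPS values, NOT the review data):
# results[game][resolution] is a list of (290X FPS, 780 Ti FPS) pairs, one per tested configuration.
results = {
    "Game A": {"3840x2160": [(40.0, 41.0), (35.0, 33.0)], "2560x1440": [(70.0, 72.0)]},
    "Game B": {"3840x2160": [(55.0, 53.0)], "2560x1440": [(90.0, 95.0), (60.0, 61.0)]},
}

weighted = []  # list of (weight, AMD/NVIDIA ratio)
for game, by_res in results.items():
    per_res = 1.0 / len(by_res)      # conditions 1 and 2: each game totals 1, split evenly across resolutions
    for res, runs in by_res.items():
        w = per_res / len(runs)      # condition 3: equal weight for every run at a given resolution
        weighted += [(w, amd / nv) for amd, nv in runs]

total = sum(w for w, _ in weighted)
arith = sum(w * r for w, r in weighted) / total
geo = math.exp(sum(w * math.log(r) for w, r in weighted) / total)
print(f"weighted arithmetic mean (AMD/NVIDIA): {arith:.3f}")
print(f"weighted geometric mean  (AMD/NVIDIA): {geo:.3f}")
```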
 
Cool, so basically still a wash. Good demonstration of how, when the cards are neck and neck, these aggregate values are more a function of the choice of benchmarks than anything else. I imagine that with such a small sample of games you could include/exclude a single game to swing the result to an AMD or an NVIDIA win.
 
I copied down the data in the above spreadsheet into my own spreadsheet and weighted the benchmarks so that these three conditions were satisfied:
  1. All benchmarks in a single game have a total weight of 1
  2. All benchmarks of the same resolution in a single game total to the same weight (in this case 0.5 since the only resolutions are 3840x2160 and 2560x1440)
  3. All benchmarks of the same resolution in a single game have the same weight value
Then the weighted arithmetic means of the ratios are actually the opposite of the unweighted means: AMD/NVIDIA = 0.992, NVIDIA/AMD = 1.020; as well as the weighted geometric mean: AMD/NVIDIA = 0.986, NVIDIA/AMD = 1.014.

And for what it's worth, comparing just the 3840x2160 benchmarks gives weighted arithmetic means of AMD/NVIDIA = 1.034 and NVIDIA/AMD = 0.977, and comparing just the 2560x1440 benchmarks gives weighted arithmetic means of AMD/NVIDIA = 0.951 and NVIDIA/AMD = 1.064 (the weighted geometric means are similar).

(I can give a picture of my spreadsheet upon request.)

I would personally rather have the 6.4% win at 2560x1440 than the 3.4% win at 3840x2160.

2560x1440 makes up 1 percent of the monitors in the Steam hardware survey. I expect 4K to be maybe a thousandth of that, as it's 10 times the cost, and that doesn't even include iMacs.

I don't think too many people are using Seiki 4K TVs as gaming monitors due to their 30 Hz limitation, plus general usability problems for daily computing.

To give 4K equal weight to 2560x1440 is ridiculous due to its rarity and its impracticality on cost grounds. Heck, even 1920x1080 is more relevant once we turn up the settings and crank the AA high enough.

Anandtech is doing AMD a huge favor by testing 4K and leaving out 1080p. The vast, vast majority of people gaming on these cards are going to be using 1080p, considering it is the most common resolution.
 
2560x1440 makes up 1 percent of the monitors in the Steam hardware survey. I expect 4K to be maybe a thousandth of that, as it's 10 times the cost, and that doesn't even include iMacs.
1. 4K is probably a decent analogue for multi-monitor gaming resolutions in terms of performance.

2. The price of UHD monitors is decreasing.

3. There is likely a strong correlation between those who buy >=US$550 graphics cards and those who have multi-monitor or UHD gaming display setups.
 