AMD: Volcanic Islands R1100/1200 (8***/9*** series) Speculation/Rumour Thread

I wonder how reviewers will approach benchmarking when Mantle sees the light of day and IF it brings a substantial performance advantage to AMD. I hope we get all the numbers in a single chart and not some separate chart at the end of the review added as an afterthought.
 
Anandtech is doing AMD a huge favor by testing 4K and leaving out 1080p. The vast, vast majority of people gaming on these cards are going to be using 1080p, considering it is the most common resolution.

Yes, I completely agree here. Despite these GPUs being the most suitable options for ultra-high resolutions, I'll bet the vast, vast majority of people who use them will still be gaming at 1080p, and it's there that the Ti extends its lead over the 290X.

4K benchmarks, if anything, should be left to a separate chart at the end, given that they're of little more than academic interest (rather than practical use) at present.

Heck, with the mass of "next-gen" games incoming, even 1080p is going to be a struggle for these GPUs in some games at max settings while maintaining 60fps.
 
Forget about playing at UHD using these chips. They are still very weak for it, not to mention 4K displays are still extremely expensive and not affordable for the vast majority of people...

4K will be played properly when these displays reach prices well below $1000, and perhaps that will be the time of the Radeon R9 490X.
 
There is likely a strong correlation between those who buy >=US$550 graphics cards and those who have multi-monitor or UHD gaming display setups.

There's probably a strong correlation the other way around. However, I would bet the vast majority of people buying flagship GPUs are running a single monitor. Multi-monitor is very niche, and the $500-$600 GPU segment existed long before that became a fad.
 
Heck, with the mass of "next-gen" games incoming, even 1080p is going to be a struggle for these GPUs in some games at max settings while maintaining 60fps.
Exactly, the rush to 1080p is, IMO, premature, especially on next-gen consoles, but also on PC. Even for the top single GPUs, running 1080p@60fps is almost impossible with demanding games. And it seems every game nowadays becomes demanding when you add those PC-specific features: TressFX, DOF, HBAO, Global Lighting, Deferred MSAA, TXAA, Tessellation, etc.

Most games weren't even ready for 1080p; scaling them to that level helped reveal their lackluster assets. To this day only a handful of games hold up under the scrutiny of 1080p.

This also significantly increased memory requirements: we went from 256MB at the beginning of the generation to 2.0/3.0GB at the end, i.e. 8-12 times the memory for some games! The stupid COD Ghosts already consumes 3GB for no freaking reason!
 
Yes, I completely agree here. Despite these GPUs being the most suitable options for ultra-high resolutions, I'll bet the vast, vast majority of people who use them will still be gaming at 1080p, and it's there that the Ti extends its lead over the 290X.

4K benchmarks, if anything, should be left to a separate chart at the end, given that they're of little more than academic interest (rather than practical use) at present.

Heck, with the mass of "next-gen" games incoming, even 1080p is going to be a struggle for these GPUs in some games at max settings while maintaining 60fps.

People with an R9 290 should be playing current games with 4K downsampled to 1080p, or at least super-sampling AA at 1080p, which should give the Hawaii chips the same advantage as playing at high resolutions.
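(For the curious: the heart of that kind of super-sampling is simply rendering to a target with 4x the pixels and averaging them back down to the display resolution. Here's a minimal C sketch of the 2x2 box-filter resolve step; the function name and buffer layout are illustrative only, not how any driver or engine actually implements it.)

```c
#include <stdint.h>

/* Resolve a render target with twice the width and height (e.g. 3840x2160)
 * down to the display resolution (e.g. 1920x1080) by averaging each 2x2
 * block of source pixels -- ordered-grid 2x2 super-sampling.
 * Purely illustrative; real drivers/engines do this on the GPU. */
void resolve_2x2(const uint8_t *src, uint8_t *dst, int dst_w, int dst_h)
{
    int src_w = dst_w * 2;
    for (int y = 0; y < dst_h; ++y) {
        for (int x = 0; x < dst_w; ++x) {
            for (int c = 0; c < 4; ++c) {          /* R, G, B, A */
                int sum =
                    src[((2*y    ) * src_w + 2*x    ) * 4 + c] +
                    src[((2*y    ) * src_w + 2*x + 1) * 4 + c] +
                    src[((2*y + 1) * src_w + 2*x    ) * 4 + c] +
                    src[((2*y + 1) * src_w + 2*x + 1) * 4 + c];
                dst[(y * dst_w + x) * 4 + c] = (uint8_t)(sum / 4);
            }
        }
    }
}
```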
 
People with an R9 290 should be playing current games with 4K downsampled to 1080p, or at least super-sampling AA at 1080p, which should give the Hawaii chips the same advantage as playing at high resolutions.

Is that an option in all games (through drivers)? Genuine question.
 
You're asking why people running multi-monitor setups are likely to also buy expensive graphics cards? Ummm because they're necessary to run games adequately at those resolutions? :)

But that's exactly what kalelovil said and you replied that the correlation was probably the other way around. Or am I drunk or something?
 
But that's exactly what kalelovil said and you replied that the correlation was probably the other way around. Or am I drunk or something?

kalelovil said that owning a high-end GPU correlates with multi-monitor/high-res setups, but trinibwoy argued NOT the opposite: high-end GPU owners don't necessarily game at very high resolutions, but high-resolution gaming setups are most likely run on high-end GPUs.

So "the other way around" is not the same as the opposite in this case. Basically, a high-end GPU owner can easily have only one 1080p display, but almost all 4K or triple-display gamers have a very high-end gaming setup.
 
kalelovil said that owning a high-end GPU correlates with multi-monitor/high-res setups, but trinibwoy argued NOT the opposite: high-end GPU owners don't necessarily game at very high resolutions, but high-resolution gaming setups are most likely run on high-end GPUs.

So "the other way around" is not the same as the opposite in this case. Basically, a high-end GPU owner can easily have only one 1080p display, but almost all 4K or triple-display gamers have a very high-end gaming setup.

Aaahh yes, that makes more sense, thanks. And I agree that the causality is probably high-resolution monitors => big GPUs.
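To put that in concrete terms, here's a toy C calculation with completely made-up numbers (not survey data), showing how "most high-end GPU owners run a single 1080p display" and "almost all multi-monitor/UHD gamers run high-end GPUs" can both be true at once:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical population, purely for illustration. */
    double high_end_gpu_owners   = 1000.0;  /* people with >= $550 cards    */
    double high_res_among_owners =  150.0;  /* of those, multi-monitor/UHD  */
    double high_res_total        =  170.0;  /* all multi-monitor/UHD gamers */

    /* P(high-res setup | high-end GPU): a minority of card owners. */
    printf("P(high-res | high-end GPU) = %.2f\n",
           high_res_among_owners / high_end_gpu_owners);   /* 0.15 */

    /* P(high-end GPU | high-res setup): almost everyone. */
    printf("P(high-end GPU | high-res) = %.2f\n",
           high_res_among_owners / high_res_total);        /* 0.88 */

    return 0;
}
```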
 
Regarding boost/powertune/turbo, while it is definitely a can of worms, it's ultimately unavoidable. As these cards are increasingly power-limited, you enter the space where you can't turn the whole chip on at once. If you design your chip to run at the same clocks in Furmark as in a game, you're going to be leaving a lot of useful performance on the floor.

This is no different from the situation on CPUs for the last couple of years, particularly on ultra-mobile (15W and down).
I'd move the time frame back and say that GPUs passed that point generations ago, likely before Furmark was even a thing.

The notable thing was just how exceptionally primitive they were shown to be, with hacky driver blacklists and cards killing themselves on demanding applications (or more recently, StarCraft 2's menu screen).

For the desktop market, I think CPUs were definitively past that point with Prescott and Sledgehammer/Barcelona at the latest, so roughly the 90/65nm time frame. It could be debated that their predecessors were already past it with power-virus software, but the inflection point was irrevocably behind them at those nodes. And that's even after the market's willingness to keep bumping TDPs, and the massive number of bins and SKUs used to soak up each part of the yield curve.

My perception of the gap is that the last desktop CPU that could be forced to kill itself thermally was the Athlon XP, whose thermal failsafe required motherboard support, and whose thermal-diode-prompted shutdown might not be fast enough if the user ripped the heatsink off.
I sadly know from personal experience how much easier it was for the Thunderbird core to kill itself.
The P4, for all the bad press it got for throttling, was the proper first step toward autonomous on-chip thermal controls that work faster than the silicon can kill itself.

The most recent and roundly confirmed case of GPUs offing themselves without even touching the cooler was in 2010 when StarCraft 2's menu screen fried GPUs.
I've seen some reports of a driver release causing similar problems in 2013, but I'm not sure that's as definite.
I find it morbidly fascinating that chips with 4-5 times the power draw of a 2000 Willamette and up to 100x the transistor count were killing themselves ten years later.

At least we're finally getting GPU hardware designs that have moved beyond the "might die from rendering furry donuts" stage.
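To make the contrast concrete, the step from diode-triggered shutdowns to today's boost/PowerTune-style schemes is essentially a control loop moved on-die. Below is a minimal C sketch of the idea; the limits, step size, and the read_power_watts()/read_temp_c()/set_clock_mhz() hooks are invented for illustration and merely stand in for on-die telemetry and clock control, not any vendor's actual firmware.

```c
#include <stdio.h>

/* Hypothetical telemetry/control hooks standing in for on-die sensors and
 * clock/voltage hardware; stubbed here so the sketch compiles and runs. */
static double read_power_watts(void) { return 240.0; }
static double read_temp_c(void)      { return 80.0;  }
static void   set_clock_mhz(int mhz) { printf("clock -> %d MHz\n", mhz); }

/* Toy power/thermal governor: nudge the clock up while there is headroom,
 * cut it back whenever the power or temperature limit is exceeded. */
static void governor_step(int *clock_mhz)
{
    const double power_limit = 250.0;   /* board power target, W   */
    const double temp_limit  = 95.0;    /* throttle temperature, C */
    const int clock_min = 300, clock_max = 1000, step = 13;

    if (read_temp_c() > temp_limit || read_power_watts() > power_limit)
        *clock_mhz -= step;             /* back off under a power virus  */
    else
        *clock_mhz += step;             /* opportunistic boost in games  */

    if (*clock_mhz < clock_min) *clock_mhz = clock_min;
    if (*clock_mhz > clock_max) *clock_mhz = clock_max;
    set_clock_mhz(*clock_mhz);
}

int main(void)
{
    int clock = 700;
    for (int i = 0; i < 5; ++i)
        governor_step(&clock);          /* would run continuously in firmware */
    return 0;
}
```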
 
At least we're finally getting GPU hardware designs that have moved beyond the "might die from rendering furry donuts" stage.
The issue, IMO, is that silicon is usually designed to work at suboptimal performance levels; after all, a power virus is just code that maximizes silicon utilization.

We could extract much more performance from our silicon; instead we use suboptimal code routines, relax cooling solutions, and settle for half-exploited chips that could really do much more but end up in the trash can before ever being used to their full potential.

The practice of assigning arbitrary so-called "TDP" numbers should stop; silicon should be cooled and programmed to its maximum theoretical performance.
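In that spirit, a power virus is nothing exotic. The C sketch below (purely illustrative) just keeps the FP units busy with independent multiply-add chains and no memory traffic, which is the same basic trick Furmark-style workloads use to push a chip toward worst-case power:

```c
#include <stdio.h>

/* Toy "power virus": independent multiply-add chains with no memory traffic,
 * so the FP units stay saturated instead of stalling. Real power viruses
 * (Furmark, OCCT, ...) do the same thing with far more care. */
int main(void)
{
    double a0 = 1.0001, a1 = 1.0002, a2 = 1.0003, a3 = 1.0004;
    const double m = 0.9999, c = 0.0001;

    for (long i = 0; i < 1000000000L; ++i) {
        a0 = a0 * m + c;    /* four independent chains hide the FMA latency */
        a1 = a1 * m + c;
        a2 = a2 * m + c;
        a3 = a3 * m + c;
    }

    printf("%f\n", a0 + a1 + a2 + a3);  /* keep the compiler from eliding it */
    return 0;
}
```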
 
I'd move the time frame back and say that GPUs passed that point generations ago, likely before Furmark was even a thing.
That's fair; I'm just noting that push is finally coming to shove more than in the past, i.e. cooling is at its limits for chip sizes/power, and throttling is becoming fairly significant.

The notable thing was just how exceptionally primitive they were shown to be, with hacky driver blacklists and cards killing themselves on demanding applications (or more recently, StarCraft 2's menu screen).
Yes, I remember railing against how unacceptable the situation was there... at the time we got a lot of silly PR replies about how Furmark isn't legitimate and so on. Maybe we can get an apology now that they've done what we were saying they needed to do in the first place? :p

The most recent and roundly confirmed case of GPUs offing themselves without even touching the cooler was in 2010 when StarCraft 2's menu screen fried GPUs.
No vsync FTL, I guess ;) That menu screen is quite expensive and I'm not even really sure why. I guess it just happens to hit a point where most parts of the GPU are near full throughput without any single significant bottleneck.
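For what it's worth, the usual mitigation for that kind of uncapped menu scene is just a frame-rate cap (reportedly what the SC2 menus eventually got). A minimal C sketch of a frame limiter, using POSIX timing calls; render_menu() is a hypothetical stand-in for the actual drawing:

```c
#define _POSIX_C_SOURCE 199309L
#include <time.h>

/* Cap a render loop at ~60 fps by sleeping away the leftover frame time
 * (clock_gettime/nanosleep on POSIX; Windows would use
 * QueryPerformanceCounter and Sleep). */
static void render_menu(void) { /* draw one frame */ }

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const double target = 1.0 / 60.0;              /* seconds per frame */
    for (int frame = 0; frame < 600; ++frame) {
        double start = now_seconds();
        render_menu();
        double remain = target - (now_seconds() - start);
        if (remain > 0.0) {
            struct timespec ts = { (time_t)remain,
                                   (long)((remain - (time_t)remain) * 1e9) };
            nanosleep(&ts, NULL);                  /* idle instead of spinning */
        }
    }
    return 0;
}
```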
 
How about ATi addressing the black-screen issue people are having with the 290(X)?
It's a SERIOUS issue, and it's not going away.
Pretty pathetic that folks in the know haven't spoken up yet.

Elpida RAM is being talked about as the culprit, but that's just based on polls... maybe the BIOS gives Elpida too little/too much voltage, or the timings are bad... whatever...

A top-shelf card that can't even be used because of a black screen?

Unacceptable, and the ATi SILENCE is deafening.
 
Let's dial it back a few notches, muzz...

Really?

Have you been perusing the forums?
I take it the answer is no.

Have you seen the MASSIVE anger from folks who paid TOP $ for a product that doesn't work?
Trying this, that, and 20 other things in HOPES that they can salvage their purchase and get a product that actually WORKS...


The Horror!

WHO DOES THAT?

I've seen ridiculously bogus stuff in this thread, fighting over idiocy that means NADA, NOTHING, sword fighting...
No threats, no "dial it down" BS like you are throwing out there, and their arguments were rehashed GARBAGE from years ago...

This isn't that, this is REAL, and if you think it's not enough to be angry about then fine, stick your head in the sand and don't pay attention... I will bring this to the front here and elsewhere, and if you don't feel it is worthy, then I won't say something that'll get me banned, but you can bet it would be pretty vicious.

Wanna avoid it, wanna protect Wavey? Go ahead, this is getting posted everywhere, because it's a SERIOUS problem.

Good Day
 