AMD: Speculation, Rumors, and Discussion (Archive)

Status
Not open for further replies.
I am curious if AMD is able to move as much product as the price suggests.

At least until a GP106 comes out, the typical inverse relationship between price and volume might insulate Polaris from GP104.
Nvidia currently can't satisfy the volume demands of the price tiers it is selling at, and those tiers are high enough that I wonder, even if there were a further cut-down SKU below the 1070, whether it would hit a pricing tier (and its volume) that would absorb product and Nvidia's attention before getting down to the 480.

Shouldn't Nvidia's yields be decent by now for GP104? Is there enough left over for a salvage SKU after selling in the higher tiers, assuming they aren't cutting the dies out of the wafer with a Dremel?

Looking at their PR numbers, it could just be an extreme bin of the chip. Their claimed increase in power efficiency ("up to", anyway) goes way beyond for the RX 480 (which is a prefix we've never seen before). Theoretically the RX series could be high-voltage, low-clocking chips that under previous nodes would be worthless enough to toss, but under this node might still be salvageable as some sort of cheap low-end bin. They'd otherwise write these chips off as a loss anyway, so why not use them to undercut the competition? AMD already had a weird bin last gen with the Nano, which was a low-voltage, low-clocking bin that, as far as I know, was never tried before (at least as a desktop-oriented bin).

Either that or AMD managed to produce complete garbage out of both the FinFET node advance and the architectural advance. At this point I wouldn't hazard a guess.
 


Um, the last cards were the R9 series... This is the RX series; X is the Roman numeral for 10.

The card uses less than 150 watts of power. I wonder if we will see a card with another power connector that clocks higher and competes with the 1070; they have quite a bit of room pricing-wise to do so.
 
9 > X/10 isn't relevant; R9 was used for two gens along with R7, R5, etc. (also the rebrand mobiles, at least, will still use Rx rather than the new RX).
 
I wouldn't consider it two gens, as it seemed to be a lot of rebrands and half-steps.
 
Yes, there were rebrands, but there were still two new generations (2nd and 3rd gen GCN) introduced during those "marketing gens".
 
If the GPUs in Rajas AotS demo were only 50% saturated (what? their AC engines? Their shader cores? Their ROPs?) [...]

Sorry for the self-quote, but Mr. Hallock has shed some light on this at reddit:
https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_quality_controversy/
"In Game Settings for both configs: Crazy Settings | 1080P | 8x MSAA | VSYNC OFF
[...]
Benchmark results:
2x Radeon RX 480 - 62.5 fps | Single Batch GPU Util: 51% | Med Batch GPU Util: 71.9% | Heavy Batch GPU Util: 92.3%
GTX 1080 - 58.7 fps | Single Batch GPU Util: 98.7% | Med Batch GPU Util: 97.9% | Heavy Batch GPU Util: 98.7%"

He goes on to claim the GTX 1080 is not doing the work it is supposed to:
"At present the GTX 1080 is incorrectly executing the terrain shaders responsible for populating the environment with the appropriate amount of snow. The GTX 1080 is doing less work to render AOTS than it otherwise would if the shader were being run properly. Snow is somewhat flat and boring in color compared to shiny rocks, which gives the illusion that less is being rendered, but this is an incorrect interpretation of how the terrain shaders are functioning in this title.

The content being rendered by the RX 480--the one with greater snow coverage in the side-by-side (the left in these images)--is the correct execution of the terrain shaders.

So, even with fudgy image quality on the GTX 1080 that could improve their performance a few percent, dual RX 480 still came out ahead. "

Unfortunately, the build of AotS they used is not available yet at my preferred place, gog.com. There it's only 1.12.19917.
 
So the graph they showed in the conference is measuring scaling %.

A single 1080 is trivially at 100% of a single 1080's performance; two RX 480s scale to 151% of a single RX 480.

That would put a single RX 480 at about 41 fps in that bench (62.5 / 1.51), some 30% slower than a 1080 (or the 1080 about 43% faster) while costing 66% less.

The 1070 is about 22% slower than a 1080 in Ashes, so it would score about 47 fps in that bench, some 14% faster than an RX 480 while costing 90% more; or, inversely, the RX 480 is 13% slower than a 1070 while costing 47% less.
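The back-of-the-envelope math above can be sketched in a few lines. Note the $199 / $379 / $599 launch prices for the RX 480 / GTX 1070 / GTX 1080 are my assumption for illustration; only the fps figures and the 151% scaling come from AMD's numbers.

```python
# Rough sketch of the scaling arithmetic in the post above.
dual_rx480_fps = 62.5    # AMD's dual RX 480 AotS result
scaling = 1.51           # "51% scaling" = 151% of a single RX 480
gtx1080_fps = 58.7       # GTX 1080 in the same bench

single_rx480_fps = dual_rx480_fps / scaling          # ~41.4 fps
deficit = 1 - single_rx480_fps / gtx1080_fps         # ~0.29 -> ~30% slower
advantage_1080 = gtx1080_fps / single_rx480_fps - 1  # ~0.42 -> ~42-43% faster

# Assumed launch prices (not stated in the thread):
price_rx480, price_1080 = 199, 599
savings = 1 - price_rx480 / price_1080               # ~0.67 -> ~66-67% cheaper

print(f"single RX 480: {single_rx480_fps:.1f} fps, "
      f"{deficit:.0%} slower than a 1080, {savings:.0%} cheaper")
```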


Sounds like a winner
 
Robert also clarified what 51% utilization means. It's 151% scaling of a single GPU.
That's what I am asking myself... this measure is given nowhere in the regular AotS version, which is the most recent one on gog.com. Instead, from the context of the line, it's the "Percent GPU Bound" for "Normal Batches" - at least that's what is there in the publicly available AotS version.

I'm curious if their newer build changes that.

edit:
Downloaded the latest version on Steam - there it's already at 1.12.19928! So gog.com is trailing behind by a few days.
 

Yeah, no idea where they got that. I would also like to know whether there is any truth to Pascal not rendering AotS correctly.
 

Any chance PCGameshardware will revisit AoTS and specifically look at its behaviour on 1080?
I have to say, I thought the 1080 image looked like it was using different shading (better, IMO, but that comes down to subjective preference), not due to the snow but in some areas of rocks where both screens had less snow - though it could be other reasons as well.
Sort of visually reminded me a bit of the Ambient Occlusion difference in Rise of the Tomb Raider and a few other games using HBAO+.
It will be interesting to see if Oxide join in this debate about the 1080 doing less work due to incorrect rendering.

This raises questions: is it just the 1080, or does Maxwell 2 behave the same regarding shaders running 'incorrectly'? Is it something native to AO, etc.? It would make for an interesting article.
Did they also use the same game version on both, as it seems the database for Polaris has only the older version of the game?
The livestream is a good place to look.
20min50sec: http://www.amd.com/en-us/who-we-are/corporate-information/events/computex
Raja mentions the one on the right is the 1080 (21min13sec).
Anyway, in the livestream the 1080 is the one with less snow, and verbally that seemed to be what Hallock was also implying, so no idea what happened in these latest image responses on reddit.

But looks like some reddit user/s swapped them?
[Attached image: AMD-Radeon-RX-480-CF-vs-GeForce-GTX-1080 side-by-side comparison]


Anyway, if the 1080 is the one with less snow, it does remind me a bit of the HBAO+-type influence Nvidia creates in some games, when looking at areas not heavily affected by snow (excluding those areas).
Cheers
 
Well, in the video, there are definitive differences. But given how, for example, physics shaders go out of control on sufficiently fast hardware (see the flags test in 3DMark Vantage or the butterflies in 3DMark 2001 SE's Nature on hardware one or two generations after their respective release), I wouldn't rule out the possibility just yet that there's simply a different number of iterations over the seeds, leading to different results. But of course, it could also be that Pascal really is rendering AotS incorrectly.

edit:
WRT AotS as a benchmark, I'm quite skeptical, especially of its integrated conglomerate of wildly different scenes that partly aren't even representative of real gameplay (multiple "chase" scenes where you'd have a close chase-cam on fighters/bombers - who'd want to play a massive strategy game like that?). And it has high variance from run to run, too.
[Attached image: xQJxYoz.png - benchmark results]

This (1080p, high preset, i7-6900K) was done only with Fraps (the much more complete logs paint a similar picture; they are just a mess to display) on a card that has a fixed core clock, so no boost variances.

Maybe once I've played the game a little and created a nice real-world savegame from which to test.


-- sorry, had to edit it because of embarrassing typos (spelling/grammar) --
 
If the 480 is 15 or even 20% slower than the 1070, then it would be easy to OC it to that level. Unless AMD has placed the clock really close to the maximum possible...
 
Well, AMD said they ran the test 10 times to average out the differences. Although I think we all agreed that this is a best-case scenario for AMD, and that's why it's the only one they showed.
 

So does this only happen on GTX 1080 or on all Nvidia cards?

I do know that the GTX 1080 is having problems with some games. Many people still can't get Warframe to run on a GTX 1080; Digital Extremes has been working on it, trying to get those users able to play the game.

Regards,
SB
 