AMD: Speculation, Rumors, and Discussion (Archive)

So, which is it, 150 watts or 100? (And why is there such a range?)

If it's drawing 100 W while running 3DMark Fire Strike Ultra, then the RX 480, normalized to the 1080's 180 W TDP, scores about 6,000 while the 1080 scores 5,000. If it's pulling 150 W, the normalized score is about 4,100. Why is there such a huge range, how is it even possible, and is it PR bullshit? I dunno, but that's the math of it.
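
For anyone who wants to check it, that normalization is just linear perf-per-watt scaling. A quick Python sketch; the ~3,400 raw Fire Strike Ultra score is my assumption, back-solved from the numbers above:

    # Linear perf-per-watt normalization: scale a card's score to what it
    # would score at a reference TDP, assuming performance scales with power.
    def normalized_score(raw_score, card_watts, ref_watts):
        return raw_score * ref_watts / card_watts

    RAW_RX480 = 3400    # assumed raw Fire Strike Ultra score (back-solved)
    GTX1080_TDP = 180   # watts

    print(normalized_score(RAW_RX480, 100, GTX1080_TDP))  # ~6120 if it draws 100 W
    print(normalized_score(RAW_RX480, 150, GTX1080_TDP))  # ~4080 if it draws 150 W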

Intriguing. Whichever is true, I wonder how much efficiency it will lose by raising clocks. The article in question also seems to hint at an 8-pin + 6-pin card, which would suggest it can at least clock high enough to require that much power.
 
The worst part is not that it is slower than a 970, but that its TDP is almost the same as the 28 nm 970's (if the RX 470 is 110 W, the RX 480 will certainly be near 150). The architectural efficiency improvement is really disappointing. It seems Kyle Bennett had a point.
Still up in the air, as WCCFtech claims it is only about 100 W at 60°C. It certainly would be amusing if all Kyle's "source" gave him was the slide showing RX 480 performance, where all it really has going for it is price and "over 5 TFLOPs".
But WCCFtech is WCCFtech... so there is that.
 
Had to finally sign up to search, but didn't find much. Has there been much word on final core configs?

Thoughts on the following?

RX480: 2304:144:32
RX470: 2048:128:32
 
Why are people so caught up on the SteamVR score? They used drivers from January for the test, and the test varies 20-30% between runs on the same system. I think the only reason AMD included it was to prove the card is VR-ready; the score isn't indicative of performance. Based on AMD's benchmarks, the 470 is between 1.6x and 1.75x the 270X, depending on whether you include Hitman in the average.
[Chart: relative performance summary, 1920x1080]
Based on this performance summary, it puts the 470 at 290-to-390 performance, which means the 480 has to be higher than that.

[Chart: performance per watt, 1920x1080]
It also puts performance per watt at 20% better than the 1080's in the best case, if the 2.8x claim holds up.
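
Rough math behind that 20% figure, with the chart's 1080 set to 1.00 (the 0.43 relative perf/watt for the 270X is my eyeball read of the chart, not a published number):

    # If Polaris really delivers 2.8x the perf/watt of the 270X, compare it
    # against the GTX 1080, normalized to 1.00. The baseline is an estimate.
    pw_270x = 0.43               # assumed: 270X perf/watt relative to the 1080
    pw_polaris = pw_270x * 2.8   # AMD's claimed generational gain
    print(round(pw_polaris, 2))  # ~1.20, i.e. about 20% better than the 1080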
 
The idea I was getting at was to take a portion of the controller and be able to raise the frequencies to properly interface with GDDR when an interposer isn't present: use one chip for multiple designs instead of respinning it with a different memory controller.
Look at it this way: HBM is better than GDDR5 for everything except cost and the ability to offer a single chip with different memory sizes.

The moment you're going to add some interface to support HBM, you're making your cheapest, highest volume GDDR5 SKU more expensive. But since you're now also supporting HBM, your complete memory system now needs to be beefed up to support the additional bandwidth as well. Otherwise, there's no point in adding HBM in the first place. (I admit that Fiji shoots a big hole in that last argument.)

It just doesn't make sense to combine the two.
 
It just doesn't make sense to combine the two.
It does if you want to cut the TDP. I think we will eventually see HBM in mid-range cards once prices drop enough; it lets you cut the TDP by a good margin and shrink the PCB a little more.
 
Otherwise, there's no point in adding HBM in the first place. (I admit that Fiji shoots a big hole in that last argument.)
You know you don't need to use 4 stacks of HBM, don't you?

The only real reason you won't see HBM on smaller GPUs is that the interposer itself drives up production costs in multiple ways.
 
You know you don't need to use 4 stacks of HBM, don't you?
Oh my! How did I miss that?

The only real reason you won't see HBM on smaller GPUs is that the interposer itself drives up production costs in multiple ways.
Yes. That's exactly the point that I'm making. And if you only need the bandwidth of 1 HBM stack, you might as well just use much cheaper GDDR5(X).
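
To put numbers on that, one HBM2 stack and a mid-size GDDR5(X) bus land in the same bandwidth ballpark; a small sketch with typical spec data rates (not tied to any particular card):

    # Peak bandwidth = bus width in bytes * per-pin data rate.
    def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
        return bus_width_bits / 8 * data_rate_gbps

    print(bandwidth_gb_s(1024, 2.0))  # one HBM2 stack @ 2 Gbps/pin: 256 GB/s
    print(bandwidth_gb_s(256, 8.0))   # 256-bit GDDR5 @ 8 Gbps: 256 GB/s
    print(bandwidth_gb_s(256, 10.0))  # 256-bit GDDR5X @ 10 Gbps: 320 GB/s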
 
It does if you want to cut the TDP. I think we will eventually see HBM in mid-range cards once prices drop enough; it lets you cut the TDP by a good margin and shrink the PCB a little more.
Eventually, as in, 3 or more years from now.

But the discussion was about supporting either GDDR5 or HBM with the same die. I don't think we'll ever see that.
 
Numbers on 3DMark Fire Strike? For the 480? Perhaps you mean leaks? If it is leaks, it is easy to lower the score simply by forcing the card to run at base clock.

Or do you mean AMD has done some benchmark PR with the 480?

PS

If some of those linked benches are correct, it seems dual 480s > dual 1080s in Ashes.
Those figures do not align well with Videocardz's, who state they received their info from an AMD partner and that the cards would be either at correct reference clocks or slightly higher.
So who do you trust more, Videocardz or WCCFtech, and whose figures are closer to what has come out of AMD?
[Chart: AMD Radeon RX 480 3DMark Fire Strike results (Videocardz)]

390 should be 89%
Cheers
 
Are you 100% sure of this?
AotS uses AFRGPU and AsymetricGPU entries in its settings.ini file, so it should be that. But there was talk earlier, though I'm not sure off the top of my head whether it involved Oxide, that you could implement MGPU schemes where, for example, one card renders the terrain and the other the moving units.
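
For illustration, flipping those entries from a script; only the key names AFRGPU and AsymetricGPU come from the post, while the flat key=value layout and the 0/1 values are my assumptions about the file format:

    from pathlib import Path

    # Set a key in a flat key=value settings file, assuming one entry per line.
    def set_flag(ini_path, key, value):
        path = Path(ini_path)
        lines = path.read_text().splitlines()
        for i, line in enumerate(lines):
            if line.split("=")[0].strip() == key:
                lines[i] = f"{key}={value}"
        path.write_text("\n".join(lines) + "\n")

    set_flag("settings.ini", "AFRGPU", "1")        # alternate-frame rendering
    set_flag("settings.ini", "AsymetricGPU", "1")  # mixed-GPU mode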

Yes. That's exactly the point that I'm making. And if you only need the bandwidth of 1 HBM stack, you might as well just use much cheaper GDDR5(X).
Not if you're Apple, where cost is basically a non-issue, since your customers will buy at any (reasonable, give or take $200) price and you care about space (x and y, not z) while also wanting to offer higher memory capacities. A single stack of HBM2 next to your 150-ish mm² GPU and you're set for performance no IGP can match, at a very small cost in board area.

This is especially viable in light of batteries no longer being just round cells stuffed together, but crammed in all over the place.

Confirming that I didn't get the deck until this afternoon. I'm assuming here that AMD had some on-site briefings at E3 for those press who were in attendance.
I was quite surprised as well. No advance warning of a press deck incoming way after working hours for us Euro people, so...
 
mini-rant:
It doesn't make much sense to have no fewer than three threads about Polaris: one about Polaris in general, another about a single Polaris 10 card (RX 480), and another about one Polaris 10 and one Polaris 11 card (RX 470 + RX 460).
Regardless, the RX 470 is being touted as the replacement for the Pitcairn-based R9 270X, with an unsurprising 2x better performance at roughly two-thirds the power (110 W).
http://wccftech.com/amd-radeon-rx-polaris-10-polaris-11-specs-performance/
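
If those rumored numbers hold, the implied efficiency jump is easy to compute (the 180 W board power for the 270X is AMD's official figure; the rest is the rumor):

    # Implied perf/watt gain from the rumored RX 470 vs. R9 270X numbers.
    r9_270x_tdp = 180   # W, official 270X board power
    rx_470_tdp = 110    # W, per the rumor
    perf_ratio = 2.0    # "2x better performance", per the same rumor

    gain = perf_ratio / (rx_470_tdp / r9_270x_tdp)
    print(round(gain, 1))  # ~3.3x perf/watt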
It makes far less sense to have one mega-thread where it is not readily apparent what is being discussed or that something new and informative has happened. Separate threads for separate products make perfect sense from an information-management perspective.

These threads are about the newly announced products; the other main thread is about the overall architecture. I know it might be too much to ask users to keep discussions organized, but that is the way forward.

The main thread should naturally die down on its own, but who knows whether there's still viable discussion left, for example if RX 490 or RX 450 type products remain.

When official benchmarks for the cards appear closer to the launch date, you will see new threads created for the different products.
 
Yes it was. I have no idea why, since we don't see separate threads for the 970, 980, 980 Ti, 1070, 1070 FE, etc.
There are separate threads for the separate products when they are announced and when they are reviewed. Because Nvidia doesn't use a different architecture for the two products, there is only one 1080/1070 announcement thread, and then separate 1080 review and 1070 review threads.
 
There are separate threads for the separate products when they are announced and when they are reviewed. Because Nvidia doesn't use a different architecture for the two products, there is only one 1080/1070 announcement thread, and then separate 1080 review and 1070 review threads.
The RX 470 and 460 do use different architectures, though: the RX 470 is Polaris 10 like the RX 480, while the RX 460 is Polaris 11.
 
The RX 470 and 460 do use different architectures, though: the RX 470 is Polaris 10 like the RX 480, while the RX 460 is Polaris 11.
Don't worry, these threads will soon be replaced by separate review threads for each card, so that should sort it out, hopefully. Or I'll get more spare time and split this thread into RX 470 and RX 460 threads.
 
RX 480M with 16 CUs in the presentation's footnote: Polaris 11 for sure (it just would not make any sense to disable that much of P10), but with all CUs enabled?
 
.....
BTW, I was going to use the 270 versus the 270X for AMD, but at AnandTech the slower 270 actually consumed more power than the faster 270X.

I think Dave mentioned somewhere before that salvage chips used for lower card tiers often had worse power characteristics than chips used for the intended top-end version, meaning they often had worse perf/watt despite sometimes consuming less power. Or was it that there was far more variability, with higher potential for lower perf/watt? It's been a while since I read that post.

So while unlikely, it's possible for the 470 to consume more power than the 480.

Regards,
SB
Any idea which article?

The reason I ask is that, going through an AnandTech article I found, the load power of a custom AIB OC 270 is only 257 W against 267 W for an AMD reference 270X.
In FurMark the custom OC 270 board is 33 W less.

[Chart: system load power, AnandTech R9 270X & 270 review]

The Asus 270 DirectCU II card has a pretty strong default OC as well; notice the HIS 270 IceQ in the review is nearly 30 W less than the reference 270X.
http://www.anandtech.com/show/7503/the-amd-radeon-r9-270x-270-review-feat-asus-his/16
Cheers
 