AMD: Speculation, Rumors, and Discussion (Archive)

According to most leaks, P10 seems to score around 390/X levels in benchmarks. The RX 480 is probably also the full P10 chip, which makes it a great upper-mid-range card at a very good price.

But what about the high end?
Is RTG's plan to leave the performance/high-end segment completely to Nvidia for a whole six months?
Or maybe GP104's performance took them completely by surprise?

Performance per watt seems to be the main focus, not necessarily absolute performance.

In an ideal world you would release GPUs for all market segments at the same time. Since it isn't an ideal world, they decided to prioritize the mainstream, where most of the volume is. Supposedly there's a smaller Vega coming in October, but I wouldn't count on it.
 

Hard to say if it was planned like that from the start, or if they initially believed HBM2 would be available sooner and had planned to release a "high end" Polaris + HBM2, or Vega and Polaris in a shorter time frame.

IF (and that's a big IF) GF 14nm can clock as high as the 1080, I don't really see any reason not to release a slightly bigger chip first (based on Vega or Polaris) with high clocks as a gaming part. But if they had planned to use HBM2 only, well, that could explain the delay.
 
The current rumours suggest Vega will be pushed out early, in October.
 
Storage fees? They're just setting the standard usable production at fewer functional CUs. If more CUs work, then so be it: laser-cut them. It's not like it hasn't happened before.

AMD is intent on waging a price war of sorts, remember. That means more chips that can be used for a product line now. When fully enabled parts cross some yield threshold (or when they're looking at next year's product line-up), they'll have something to show to OEMs or to folks holding off.

There's already a rumour of "10nmFF" not being much of an upgrade either. For all we know, it's going to be another 4-5 years to hit 7nm, and we don't know what AMD is planning to do with GCN in the interim. Stretching the Polaris 10/11 masks across products over the years might be useful for their, er... bottom line/budgets.

You touched on an aspect that concerns me from a business perspective.
You rightly mention AMD is sort of creating a price war, at a time when the next die shrink is not expected to provide the same level of gains we've recently seen.
So what are the repercussions if a business sells its best, most improved technology product at a discounted price, one that will also compete pretty well against next-generation products?
You end up caught in a cycle of depressed prices; great for consumers, but not great for a manufacturing business.
Sure, they may shift a fair number of units over the next few quarters, but this is offset against what could be achieved with higher margins, and the real downside is that it acts as a price anchor.

History has already shown this, when AMD tried to raise prices with the 390/390X at launch compared to the heavily discounted earlier models near end of life.
And this situation could get nasty if Nvidia responds with its own competitive price corrections (not talking about the 1%-of-the-market enthusiast cards); unlikely, but if AMD sells well they will adjust the 1070 and lower cards IMO.
So future technologies need greater levels of R&D investment to gain more performance and efficiency on the next node (10nmFF), potentially hurting the business even more if the above scenario plays out to some extent.
This would be applicable to both companies.
Cheers
 
Those figures do not align well with Videocardz, who state they received their info from an AMD partner and that the card was either at correct reference clocks or slightly higher.
So who do you trust more, Videocardz or Wccftech, and whose figures are closer to what has come out of AMD?
[Image: AMD-Radeon-RX-480-3DMark-Fire-Strike-2.png]


The 390 should be at 89%.
Cheers

I'll believe what is shown on June 29 when the NDA lifts and everyone can publish.

But again, unless they made up the 6-pin+6-pin and 6-pin+8-pin configurations, I'm expecting some versions of these cards to do quite well.
 
Not selling things you have is a pretty poor way to make money.
I'm pretty sure we'll know the real deal after Apple has updated their machines with Polaris GPUs; then we'll see whether they're getting the 36 CU part, or whether it's like Tonga, where they got the first fully enabled shader parts.
 
If they are anything like the 290X, then if you give them voltage, they will scale, and scale, and scale. Two of them at 1.5 V vcore can trip a 1200 W power supply. If you gave them a third 8-pin connector, they would use it. The problem is keeping them cool.

I don't see how the RX 480 (or any other video card) could be any different, unless the PCB components are not built like a tank (the way the 290X's are).
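
For a rough sanity check on why 1.5 V trips a 1200 W supply, here's the usual dynamic-power relation (P ~ C * V^2 * f) as a quick Python sketch; the stock 290X figures below are approximate assumptions, not measurements.

Code:
# Back-of-envelope dynamic power scaling: P ~ C * V^2 * f.
# Stock 290X figures are rough assumptions (approx. 1.2 V, 1000 MHz, 250 W).
stock_v, stock_mhz, stock_w = 1.2, 1000, 250

def scaled_power(volts, mhz):
    # Dynamic power grows with the square of voltage and linearly with clock.
    return stock_w * (volts / stock_v) ** 2 * (mhz / stock_mhz)

per_card = scaled_power(1.5, 1200)          # ~469 W per card
print(f"one card:  ~{per_card:.0f} W")
print(f"two cards: ~{2 * per_card:.0f} W")  # ~938 W before CPU and the rest

Two cards alone land near 940 W under those assumptions, so a loaded CPU and platform on top can plausibly push a 1200 W unit over its limit.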
 
Any idea which article?

Reason I ask is that, going through an AnandTech article I found, the load power of a custom AIB OC 270 is only 257 W against 267 W for an AMD reference 270X.
In FurMark the custom OC 270 board draws 33 W less.

Hmmm, I honestly can't find it again. It had both a reference 270 and a reference 270X, IIRC. I'm guessing I saw it when I was looking up various other card combinations and found it strange. Then again, that plays right into what Dave said: there is much higher variability in the salvage chips than in the full chips used for the higher models.

Another example I ran across while looking for the article again: the R9 280 consumes 14 watts less than the R9 280X, and the R9 290 consumes 6 watts more than the R9 290X.

Yeah, my apologies. Either I thought I saw something that I didn't see, or it was one obscure video card review that I can't find again.

Regards,
SB
 
Like you, I trawled various reviews, but they make it hard to get a true like-for-like comparison from a single reviewer :(
Custom AIB models make it so difficult.
I think it was possibly an obscure review and I just missed it.
And the situation is made worse by the fact that the only really ideal measurements are the ones taken at the card's power terminals and the PCIe slot.
Thanks
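
As an aside, the terminals-and-slot method is just summing V * I over each supply rail. A minimal sketch of that bookkeeping, with hypothetical current readings purely to show the arithmetic:

Code:
# Total board power from per-rail measurements ("terminals and slot").
# The current readings here are hypothetical, just to show the arithmetic.
rails = {
    "slot 12V":     (12.0, 4.5),   # (volts, amps), measured via a riser
    "slot 3.3V":    (3.3,  0.9),
    "6-pin #1 12V": (12.0, 8.2),
    "6-pin #2 12V": (12.0, 7.6),
}

board_power = sum(v * a for v, a in rails.values())
print(f"total board power: {board_power:.1f} W")   # ~246.6 W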
 
Could also be a bandwidth limitation not making the full chip worth it yet. The question is whether Polaris supports GDDR5X; the obvious reason not to use GDDR5X would be supply at this stage. It looks like the goal of Polaris is delivering the best possible perf per watt while sliding right in between what would traditionally be the mainstream (128-bit memory bus) and the "performance" segment (above a 128-bit memory interface). As a result it is a high-volume card and also ships with 4/8 GB; GDDR5X doesn't sound viable at this stage for that number of chips.

I think there is a 50/50 chance of more shaders + GDDR5X support. What we really need is Chipworks to get on this ASAP!
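
For context, the bandwidth gap in question is easy to put numbers on; a quick sketch, assuming the rumoured 256-bit bus (none of these Polaris figures were confirmed at this point):

Code:
# Memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
# The 256-bit / 8 Gbps Polaris 10 configuration is rumoured, not confirmed.
def bandwidth_gb_s(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(f"128-bit GDDR5  @ 7 Gbps:  {bandwidth_gb_s(128, 7):.0f} GB/s")   # 112, mainstream
print(f"256-bit GDDR5  @ 8 Gbps:  {bandwidth_gb_s(256, 8):.0f} GB/s")   # 256
print(f"256-bit GDDR5X @ 10 Gbps: {bandwidth_gb_s(256, 10):.0f} GB/s")  # 320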
 
What if AMD is saving fully enabled parts for next year?

i.e.
480X - 40CU
480 - 36
470X - 32 ???
470 - 28 ???
460X - 20 ???
460 - 16

They'd maybe get better yields by introducing non-X parts now, and then, when the node matures, they can bring out fully enabled chips.

One might say it would be the reverse of the 28nm strategy (aside from the Tonga situation).
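
A toy defect-yield model shows why leading with salvage SKUs makes sense on an immature node; the die area and defect density below are invented for illustration only:

Code:
import math

# Toy yield model: defects per die ~ Poisson(area * defect_density).
# Die area and defect density are invented numbers for illustration.
area_cm2, d0 = 2.3, 0.4            # ~230 mm^2 die, 0.4 defects/cm^2
lam = area_cm2 * d0                # expected defects per die

def p_at_most(k, lam):
    # P(X <= k) for a Poisson-distributed defect count.
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

# Crude simplification: every defect kills exactly one CU and nothing else.
print(f"full 40 CU dice:    {p_at_most(0, lam):.0%}")  # needs a perfect die
print(f"36 CU salvage dice: {p_at_most(4, lam):.0%}")  # tolerates 4 dead CUs

Under those made-up numbers only about 40% of dice come out fully enabled, while nearly every die qualifies as a 36 CU part, which is exactly the gap that shrinks as the node matures.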

Next year? You mean fall? Small Vega should hit in fall, which would negate the impact of a 480X with 40 CUs. If it existed, I could see them announcing it in the fall alongside the Vega series.
 
Why are people so caught up on the SteamVR score? They used drivers from January for the test, and the test varies by 20-30% between runs on the same system. I think the only reason AMD showed it was to prove the card is VR-ready; the score isn't indicative of performance. Based on AMD's own benchmarks, the 470 is between 1.6x and 1.75x the 270X, depending on whether you include Hitman in the average.
[Image: perfrel_1920_1080.png]
Based on this performance summary, that puts the 470 at 290-to-390 level performance, which means the 480 has to be higher than that.

[Image: perfwatt_1920_1080.png]
It also puts performance per watt at 20% better than the 1080 in the best case, if the 2.8x figure holds up.
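
The arithmetic behind those two claims, as a sketch; the chart readings (270X = 1.00 baseline) are approximate assumptions, not exact values:

Code:
# Where the 470 lands if AMD's 1.6x-1.75x-over-270X claim holds.
# Chart readings (270X = 1.00 baseline) are approximate assumptions.
r9_290, r9_390 = 1.60, 1.78        # assumed relative-performance readings
rx470_low, rx470_high = 1.60, 1.75 # AMD's claimed range vs the 270X
print(f"470: {rx470_low:.2f}x-{rx470_high:.2f}x -> 290 ({r9_290}x) to 390 ({r9_390}x) territory")

# Perf/W best case: AMD's claimed 2.8x over the previous generation,
# against an assumed ~2.33x chart reading for the GTX 1080.
print(f"480 vs 1080 perf/W: {2.8 / 2.33:.2f}x")  # ~1.20x, i.e. the quoted 20%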


Probably because the card is pitched towards affordable VR, and I think it will provide that, but at the same time people may have high expectations for high-resolution performance in general.

I have the distinct feeling this card will be ROP-limited at 4K and in VR, if the 6.3 score is any indication; perhaps even limited by raw bandwidth.

These will IMO shine at 1080p, but beyond that they may be a bit iffy and not compare so favourably to the old Hawaiis.

As for old drivers: that's fine, but most of the data for Hawaii cards consistently hitting well over 7 is on old drivers too.
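
On the ROP point, raw pixel fill rate is just ROP count times clock; a quick comparison, keeping in mind that 32 ROPs for Polaris 10 was still a rumour and the clocks are approximate:

Code:
# Peak pixel fill rate = ROPs * core clock.
# 32 ROPs for Polaris 10 is rumoured, and both clocks are approximate.
def fill_gpix_s(rops, mhz):
    return rops * mhz / 1000.0

print(f"Polaris 10 (32 ROPs @ ~1266 MHz): {fill_gpix_s(32, 1266):.1f} Gpix/s")
print(f"Hawaii     (64 ROPs @ ~1000 MHz): {fill_gpix_s(64, 1000):.1f} Gpix/s")

If those assumptions hold, Hawaii keeps a healthy raw fill-rate lead, which would fit the cards looking less favourable against it at 4K.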
 
Yes, but you don't really spend 200 dollars expecting to play at 4K, do you?
 
Well, VR has really high requirements:

4K at 30 Hz -> 3840 x 2160 x 30 -> ~250 MPixel/s

Oculus/Vive at 90 Hz -> 1080 x 1200 x 90 x 2 x 1.4 -> ~325 MPixel/s

Note: the 1.4x factor for VR comes from VR best practices, which demand supersampling to avoid excessive aliasing in the center of the image due to lens distortion (Pascal can get rid of this factor by using lens-matched shading)

In practice there are other tricks a VR app can use to avoid filling so many pixels per second, but I just wanted to show that the number of pixels a VR app needs to push per frame is actually similar to, or larger than, what a 4K app at 30 Hz needs.
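
The same arithmetic in runnable form, for anyone who wants to plug in other headsets or refresh rates:

Code:
# Pixels per second each target needs to fill (same numbers as above).
def mpix_per_s(w, h, hz, eyes=1, supersample=1.0):
    return w * h * hz * eyes * supersample / 1e6

uhd  = mpix_per_s(3840, 2160, 30)                           # ~249 Mpix/s
vive = mpix_per_s(1080, 1200, 90, eyes=2, supersample=1.4)  # ~327 Mpix/s
print(f"4K @ 30 Hz:        ~{uhd:.0f} Mpix/s")
print(f"Vive/Rift @ 90 Hz: ~{vive:.0f} Mpix/s")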
 
Good perf/€, good perf/W, and good performance overall (fast enough for 1080p@60 in most games); what else would I need?
(Maybe a little more if I upgrade my screen to a higher resolution, but I don't see the point at the moment: it would mean € for the screen, then more € for the GPU, then more € for electricity, and more heat, all for a dubious benefit...)
 
I would like to see some tests of the supposed improvements to the geometry/tessellation engine in Polaris. I think this is key to combating the poor results in GameWorks titles, or at least to damage control ;)
 