AMD: Speculation, Rumors, and Discussion (Archive)

I think there are some pretty unrealistic expectations regarding Polaris 10 in this thread.
Polaris 10 is a 232 mm^2 chip.
That same chip would be larger than 500 mm^2 on 28nm; even without clock bumps it should perform faster than a 390X by a significant margin. Given that a 1070 only seems to be about 3/4 of a 1080, it should perform about the same as a 1070.
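A back-of-envelope version of that die-size claim, for reference; the ~2.2x density factor for 28nm to 14nm FinFET is my assumption, not an official number:

```python
# Back-of-envelope sketch of the die-size claim above. The ~2.2x density
# factor for 28nm -> 14nm FinFET is an assumption, not an official figure.
P10_AREA_14NM = 232.0          # mm^2, rumored Polaris 10 die size
DENSITY_SCALING = 2.2          # assumed transistor-density gain, 28nm -> 14nm

equivalent_28nm_area = P10_AREA_14NM * DENSITY_SCALING
print(f"Equivalent 28nm die area: ~{equivalent_28nm_area:.0f} mm^2")
# -> ~510 mm^2, i.e. roughly Hawaii/GM200 territory if the scaling holds
```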
 
Why would it be significantly faster than a 390X? Just because the die size is larger doesn't mean its perf/mm is higher if they were on the same process. Granted, throughput will be higher, but most of the rumors we have seen put the ALU count on Polaris 10 below the 390X's.

nV's GP104 clock speeds are much higher and recover the decrease in ALU count. So without a similar clock-rate boost...
 
Why would it be significantly faster than a 390X? Just because the die size is larger doesn't mean its perf/mm is higher if they were on the same process. Granted, throughput will be higher, but most of the rumors we have seen put the ALU count on Polaris 10 below the 390X's.

nV's GP104 clock speeds are much higher and recover the decrease in ALU count. So without a similar clock-rate boost...
Unless you expect lower performance/mm for some reason; I don't see how that would make any sense.

Also, fewer ALUs compared to what?
 
Polaris 10 compared to a 390X: the 390X has 2816 ALUs, while the rumors put Polaris 10 at 2560 ALUs.

Also, the transistor count for a 390X is 6.2 billion, while it looks like Polaris will have around ~8.6 billion. That puts it at Fiji-level transistor counts (granted, you have to reduce that amount when comparing to Fiji due to the size of the HBM memory controller).

That means they have put many more transistors into the front-end changes.

Going by this, they need a clock level similar to Pascal's to get to the 1070/1080 performance level. If they are stuck at current clock speeds, you are going to get R9 390-ish levels, maybe a bit more.
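As a sanity check on those numbers, a quick hypothetical calculation; it assumes performance scales linearly with ALUs x clock, which deliberately ignores the front-end changes, and the 1.3x target uplift is an illustrative guess:

```python
# Hypothetical throughput parity check based on the rumored specs above.
# Assumes performance scales linearly with ALUs * clock, which ignores the
# front-end changes discussed in this thread (a deliberate simplification).
R390X_ALUS, R390X_CLOCK = 2816, 1050          # MHz, reference 390X boost clock
P10_ALUS = 2560                                # rumored Polaris 10 ALU count

parity_clock = R390X_CLOCK * R390X_ALUS / P10_ALUS
print(f"Clock for raw-throughput parity with a 390X: ~{parity_clock:.0f} MHz")

# Assumed uplift needed to reach a GTX 1070-class target (illustrative only)
target_uplift = 1.3
print(f"Clock for ~{target_uplift:.1f}x a 390X: ~{parity_clock * target_uplift:.0f} MHz")
```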
 
Let's be honest:
using 4K as a performance point is mostly irrelevant, because the fps and frame times are just too weak.
Case in point: look at The Witcher 3 at 4K, or, for a more level playing field that doesn't push the boat out, GTA V.
GTA V is still not acceptable in terms of playability at 4K.

From a performance-measurement perspective, it may be useful to compare an architecture's performance trend as it goes up the resolutions, including 4K.
I appreciate that the argument changes slightly when playing in mGPU, but that adds a whole other set of variables that skew GPU performance calculations.
Cheers

At 1440p, the positions of the Fury X and 980 Ti interchange, but the respective performance deltas remain the same. And as I said, lower resolutions would likely help it more than the bigger 28nm cards. So it should clear the Titan X too.

4K performance is a problem because review sites turn all the knobs up; otherwise, dropping one or two quality settings from ultra can make a lot of difference.
 
As if on cue,

http://videocardz.com/59808/amd-vega-gpu-allegedly-pushed-forward-to-october

http://www.3dcenter.org/news/amd-zieht-den-vega-launch-angeblich-auf-oktober-2016-vor

Release together with / close to BF1. You'll think I'm a fool, but I can't tell you where the info comes from.
However, AMD seems to have been surprised by the clock speeds of Pascal.
 
Polaris 10 compared to a 390X: the 390X has 2816 ALUs, while the rumors put Polaris 10 at 2560 ALUs.
The small performance gap between the 390X and the 390 suggests all 2816 ALUs were not being used to their full potential, and that there were bottlenecks elsewhere in the design. Until we see what architectural changes have been made in Polaris, it is premature to draw a direct comparison.
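One way to make that bottleneck argument concrete is a simple Amdahl-style estimate; the ~5% observed 390-to-390X gap below is an assumed round number, not a measurement:

```python
# A minimal Amdahl-style sketch of the bottleneck argument above. The ~5%
# observed 390 -> 390X gap is an assumed round number for illustration.
ALU_RATIO = 2816 / 2560        # the 390X has 10% more ALUs than a 390
OBSERVED_SPEEDUP = 1.05        # assumed typical benchmark gap

# Solve speedup = 1 / ((1 - f) + f / ALU_RATIO) for f, the fraction of
# frame time that is actually shader-throughput-limited.
f = (1 - 1 / OBSERVED_SPEEDUP) / (1 - 1 / ALU_RATIO)
print(f"Implied shader-limited fraction: ~{f:.0%}")   # -> ~52%
```

Under those assumptions, only about half the frame time would be shader-limited, which is the sense in which adding ALUs alone stops helping.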
 
Polaris 10 compared to a 390X: the 390X has 2816 ALUs, while the rumors put Polaris 10 at 2560 ALUs.

Also, the transistor count for a 390X is 6.2 billion, while it looks like Polaris will have around ~8.6 billion. That puts it at Fiji-level transistor counts (granted, you have to reduce that amount when comparing to Fiji due to the size of the HBM memory controller).

That means they have put many more transistors into the front-end changes.

Going by this, they need a clock level similar to Pascal's to get to the 1070/1080 performance level. If they are stuck at current clock speeds, you are going to get R9 390-ish levels, maybe a bit more.

You're ignoring all the shader changes. How much of an effect, and how often, do the pathological worst cases, or just generally bad cases, of shader utilization/occupancy have on frame rate? How much on minimum frame rates, how much on the average, and how much on the maximum? What would the average frame rate look like if the minimum frame rate were much closer to the maximum, compared to traditional GPUs? How would you judge which GPU is better: one with a higher maximum, or one with higher minimums?
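For what it's worth, that min/avg question can be made concrete by comparing average FPS against 1%-low FPS from captured frame times. A minimal sketch with synthetic data (the capture values are made up for illustration):

```python
# Compare average FPS against 1%-low FPS from a list of per-frame times.
def fps_summary(frame_times_ms):
    """Return (average FPS, 1%-low FPS) from per-frame times in milliseconds."""
    ordered = sorted(frame_times_ms)                    # worst frames last
    avg_fps = 1000.0 * len(ordered) / sum(ordered)
    worst_1pct = ordered[int(len(ordered) * 0.99):]     # slowest 1% of frames
    low_fps = 1000.0 * len(worst_1pct) / sum(worst_1pct)
    return avg_fps, low_fps

# Hypothetical capture: mostly 16 ms frames with occasional 40 ms spikes
sample = [16.0] * 990 + [40.0] * 10
avg, low = fps_summary(sample)
print(f"avg: {avg:.1f} fps, 1% low: {low:.1f} fps")
# -> avg ~61.6 fps but 1% low of 25 fps: a high average hiding bad minimums
```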
 
That same chip would be larger than 500 mm^2 on 28nm; even without clock bumps it should perform faster than a 390X by a significant margin. Given that a 1070 only seems to be about 3/4 of a 1080, it should perform about the same as a 1070.

You're assuming Hawaii's GCN2 == Polaris' GCN4 regarding die area for the same execution units and other resources.
By that logic, a 380X Tonga should have about 82% the performance of a 390X Hawaii, since it's 360 vs. 440mm^2. You'll see no such thing.

Apart from the transistors that are being spent in order to improve power consumption and the ones being used for an upgraded video codec engine, AMD has claimed they took advantage of the new process node to redesign a substantial part of the graphics pipeline itself.
 
At 1440p, the positions of the Fury X and 980 Ti interchange, but the respective performance deltas remain the same. And as I said, lower resolutions would likely help it more than the bigger 28nm cards. So it should clear the Titan X too.

4K performance is a problem because review sites turn all the knobs up; otherwise, dropping one or two quality settings from ultra can make a lot of difference.
Look at TPU: they are using a reference 980 Ti (not a custom AIB card). An AIB-to-AIB comparison using PCGamesHardware also shows a similar trend, namely that the 980 Ti is at its optimum at 1440p.
http://www.techpowerup.com/reviews/AMD/R9_Fury_X/24.html

At PCGamesHardware that is a reference Fury X, so allow a little for custom AIB cards.
To stress the point, however: you would get skewed results comparing a reference 980 Ti to a reference Fury X, as their performance windows and headroom are very different.
You will need to scroll down to the window showing performance results, where you can select resolutions. I use the translation option in Chrome to read this site.
http://www.pcgameshardware.de/Far-Cry-Primal-Spiel-56751/Specials/Benchmark-Test-1187476/
http://www.pcgameshardware.de/The-Division-Spiel-37399/Specials/PC-Benchmarks-1188384/

While The Division is perceived to be an NVIDIA game, it runs well on AMD's top hardware. My point, though, is to show how 1440p is the optimum resolution and how things break down at 4K: partly from the perspective of comparing different hardware, where behaviour visibly changes and skews the comparison (fine if you are looking at the behaviour of one architecture, or if you intend to play at 4K, but that leads to the next point), and partly because 4K is mostly unplayable, with weak frame times, on single-GPU cards.

4K has limited uses in technical discussions comparing different architectures and bus widths/bandwidth.
None of these cards is really designed for 4K from an R&D/project-scope perspective, and neither are games yet (if they were, developers would be working on rendering-engine technology to make 4K more feasible).

Cheers
 
It's funny that none of them picked up on the potential challenges/costs associated with HBM2 that early, when a lot of articles did for the NVIDIA P100 :)

NVIDIA is getting its HBM2 memory from Samsung, which went into mass production near the end of January.
The same 4GB HBM2 from SK Hynix is not going into mass production until Q3 (according to SK Hynix).
Logistics-wise, I doubt AMD will be able to get much of Samsung's capacity, given that Tesla is now in production and NVIDIA probably has a priority contract and is actually buying now.
So it is back to Hynix, and all the costs and ramp-up that Samsung went through much earlier.

If AMD does go with HBM2, then supply and cost are going to be big factors for Vega.
Which is why I wonder if there is a backup plan to go with GDDR5X (a headache for AMD, but they may have done this; maybe NVIDIA too, if they intend, or keep the option, to release a model above the GP104 variant early as well).
Cheers
 
Look at TPU: they are using a reference 980 Ti

Look at my first post again; I mentioned it there. As for the rest of your post: these companies tout their cards for 4K. You might have a point about 1440p being better for comparisons, but then you end up going down the route you just went down.
 
Bad news for Polaris, if true:
AMD's partners "won't have any new cards to display at Computex and the only Polaris cards promoted to them from AMD are R9 390/390X performance class but for a mid-range price. Great value but no sign of any GTX 1080 contender". I thought I'd reach out to an AMD partner, and received a quick "don't think so" from a board partner, and then I asked for clarification to which they said "as of now, no information".
More at the source: http://www.tweaktown.com/news/52056/rumor-amd-partners-showing-polaris-cards-computex/index.html
 
You're ignoring all the shader changes. How much of an effect, and how often, do the pathological worst cases, or just generally bad cases, of shader utilization/occupancy have on frame rate? How much on minimum frame rates, how much on the average, and how much on the maximum? What would the average frame rate look like if the minimum frame rate were much closer to the maximum, compared to traditional GPUs? How would you judge which GPU is better: one with a higher maximum, or one with higher minimums?


I'm not ignoring that; that's why I stated it could be faster than a 390X at the same clocks. But I don't expect it to be much faster. Remember what AMD stated about the 30/70 split for perf/watt between architecture and node respectively: that means they are getting some performance from the architectural changes, but it's not a whole lot.
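For a rough sense of scale, here is one possible reading of that 30/70 split applied to AMD's advertised ~2.5x perf/watt figure; both the headline number and the additive interpretation are assumptions, since AMD has not published the exact math:

```python
# One possible reading of the 30/70 perf/watt split mentioned above, applied
# to AMD's advertised ~2.5x Polaris figure. Both the figure and this additive
# reading of the split are assumptions.
TOTAL_PERF_PER_WATT_GAIN = 2.5     # AMD's headline Polaris claim
ARCH_SHARE, NODE_SHARE = 0.30, 0.70

improvement = TOTAL_PERF_PER_WATT_GAIN - 1.0
arch_factor = 1.0 + ARCH_SHARE * improvement
node_factor = 1.0 + NODE_SHARE * improvement
print(f"architecture alone: ~{arch_factor:.2f}x, node alone: ~{node_factor:.2f}x")
# -> ~1.45x from architecture, ~2.05x from the node, under this reading
```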
 
Look at my first post again; I mentioned it there. As for the rest of your post: these companies tout their cards for 4K. You might have a point about 1440p being better for comparisons, but then you end up going down the route you just went down.
I think it is mostly AMD that touts the 4K benchmark, which makes sense since it shows them at their most competitive. But when either company mentions 4K, they carefully ignore that games and hardware are not really designed to run at that resolution just yet, especially looking at the latest next-gen games.
It is not just the average fps, but also consistency and frame times, which are all over the place in the most modern games at 4K on good settings.
Cheers
 
There is no "designed for 4K". The only things necessary are enough graphics memory to hold the frame buffers and a display engine that can output 4K. Everything else is just performance (and yes, that includes graphics memory), and that is entirely dependent on the game being tested/played.
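For scale, a rough estimate of what the 4K frame buffers themselves actually cost in memory; the buffer mix here is an illustrative assumption, since real engines use many more render targets than this:

```python
# Rough numbers behind the "enough memory for the frame buffers" point above.
# Buffer mix (color + depth, double-buffered) is an illustrative assumption.
WIDTH, HEIGHT = 3840, 2160          # 4K
BYTES_PER_PIXEL = 4                 # RGBA8 color, or 32-bit depth/stencil

color = WIDTH * HEIGHT * BYTES_PER_PIXEL
depth = WIDTH * HEIGHT * BYTES_PER_PIXEL
total = 2 * color + depth           # double-buffered color + one depth buffer
print(f"~{total / 2**20:.0f} MiB")  # -> ~95 MiB, tiny next to a 4+ GB card
```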
 
Well, the price-bracket/performance of the 970 tier seems to be the sweet spot; NVIDIA has a 5% footprint there compared to 1% for the 980s.
I can see the worry for AIB partners: they like to also present an enthusiast card, as this gives them more room to play with performance, components used, etc., and allows a stronger narrative around the company's ethos and design.
I guess they are also concerned about being stuck with a mass of mid-range previous-gen cards again, and the headache this had for pricing/margins.
Cheers
 
There is no "designed for 4K". The only things necessary are enough graphics memory to hold the frame buffers and a display engine that can output 4K. Everything else is just performance (and yes, that includes graphics memory), and that is entirely dependent on the game being tested/played.

And the graphics settings the game is being played at. People have been playing at comparable resolutions on multiple monitors for years now, even on single GPUs, so for someone to say that neither the hardware nor the software is designed for it is a bit silly.
 
There is no "designed for 4K". The only things necessary are enough graphics memory to hold the frame buffers and a display engine that can output 4K. Everything else is just performance (and yes, that includes graphics memory), and that is entirely dependent on the game being tested/played.
I would expect that at some point engines will need to do some interesting rendering optimisation to maintain 60fps in a next-gen game at 4K, like what AMD/NVIDIA are doing for VR to maintain 90fps.
4K IMO is not really on their radar at the moment, apart from being offered as just another resolution option.
Cheers
 
And the graphics settings the game is being played at. People have been playing at comparable resolutions on multiple monitors for years now, even on single GPUs, so for someone to say that neither the hardware nor the software is designed for it is a bit silly.
Except that we are only now seeing a solution from NVIDIA (in Pascal) that corrects the perspective of using 3 monitors, involving additional hardware and software, with the most optimal approach to performance :)
Of course this is also applicable to VR, and that may be the driving force behind the technology, but possibly the professional world as well.
Cheers
 