AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

I have one question for the B3D Gurus, if you don't mind.

I am an avid supporter of dual-GPU systems, and it has come to my attention that Nvidia has much lower CPU overhead/usage than AMD. The following numbers show this; keep in mind that these are single-GPU results. Dual GPU would be even worse.



I am an active YouTuber and I have recently benchmarked Assassin's Creed Unity on my i5 2500K and 7950s. Allow me to post the benchmarks in case you want to make any remarks. (Sorry for the camera quality; I'll go digital soon. Also, spicy wallpaper alert :p)

[omega] ASSASSIN'S CREED UNITY 1920X1080 CUSTOM ULTRA 7950 @ 1.1 GHz, Core i5 2500K @ 4.8 GHz - 58 fps

[omega] ASSASSIN'S CREED UNITY 1920X1080 CUSTOM ULTRA Crossfire 7950 @ 1.1 GHz, Core i5 2500K @ 4.8 GHz - 58 fps

Now, as you can see, the framerate result was the same, and the reason for that is the CPU overhead of the dual GPUs.

You can see a 7950 vs. 7950 CFX side-by-side run-through, which is roughly in sync after a while, that clearly shows the much higher CPU usage on the top-right OSD.

So do you guys think we will see an improvement in this area in the R9 3XX series?

Does this have to do with how the driver is written, or are there deeper architectural implications as well?
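
For what it's worth, here's a minimal sketch of the usual way I'd separate a driver/CPU limit from a GPU limit: re-run the same scene at a couple of CPU clocks and check whether the framerate follows the clock. The numbers and the looks_cpu_bound helper below are made up for illustration, not actual results.

```python
# Minimal sketch: decide whether a scene looks CPU/driver-bound or GPU-bound
# from (cpu_clock_ghz, avg_fps) pairs. All numbers below are hypothetical.

def looks_cpu_bound(runs, tolerance=0.10):
    """Return True if fps scales roughly in proportion to CPU clock."""
    runs = sorted(runs)                      # sort by CPU clock
    (low_clk, low_fps), (high_clk, high_fps) = runs[0], runs[-1]
    clock_gain = high_clk / low_clk - 1.0    # e.g. +50% clock
    fps_gain = high_fps / low_fps - 1.0      # e.g. +45% fps
    # If most of the clock increase shows up as fps, the CPU side is the limiter.
    return fps_gain >= (1.0 - tolerance) * clock_gain

# Hypothetical single-GPU vs Crossfire runs of the same scene:
single = [(3.2, 52), (4.8, 58)]   # fps gains little from +50% clock -> mostly GPU-bound
cfx    = [(3.2, 40), (4.8, 58)]   # fps scales almost 1:1 with clock -> CPU/driver-bound

print("single GPU CPU-bound?", looks_cpu_bound(single))   # False
print("crossfire CPU-bound? ", looks_cpu_bound(cfx))      # True
```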
 
For a fair comparison, we would also need to see how GameWorks or other Nvidia features impact the driver overhead, so a test with a non-Nvidia title would help.

Assassin's Creed is the worst possible example; Crossfire just doesn't work in that game. Your 2500K at 4.8 GHz will not bottleneck anything. If you get 58 fps with and without CFX, that just means CFX isn't working there. What does your GPU usage look like?

(Just for reference, I have 2x 7970 @ 1050 MHz with an i7 4930K and 16 GB 2133 MHz C9.)
 
I have one question for the B3D Gurus, if you don't mind.

I am an avid supporter of dual-GPU systems, and it has come to my attention that Nvidia has much lower CPU overhead/usage than AMD. The following numbers show this; keep in mind that these are single-GPU results. Dual GPU would be even worse.

The numbers don't show that at all. A single GTX 980 outperforms a single 290X in one game, and from this you conclude that Nvidia has lower CPU overhead?
I am an active YouTuber and I have recently benchmarked Assassin's Creed Unity on my i5 2500K and 7950s.

Now, as you can see, the framerate result was the same, and the reason for that is the CPU overhead of the dual GPUs.

Again, this doesn't mean it's due to CPU overhead. As lanek says, if the framerate is the same, that means Crossfire is just not working in this title, and in situations like this it's even possible for the dual GPUs to get a lower framerate than the single GPU. I would suggest you read some detailed reviews, especially across a wider selection of games, before drawing any conclusions.
 
A single GTX 980 outperforms a single 290X in one game, and from this you conclude that Nvidia has lower CPU overhead?
I think he concluded that from the marginal difference between the two in scaling. Considering how many more frames the 980 renders, on equal terms it should also show greater dependence on the CPU. But indeed, it is only one game, and a GameWorks one at that.
 
For a fair comparison, we would also need to see how GameWorks or other Nvidia features impact the driver overhead, so a test with a non-Nvidia title would help.

Assassin's Creed is the worst possible example; Crossfire just doesn't work in that game. Your 2500K at 4.8 GHz will not bottleneck anything. If you get 58 fps with and without CFX, that just means CFX isn't working there. What does your GPU usage look like?

(Just for reference, I have 2x 7970 @ 1050 MHz with an i7 4930K and 16 GB 2133 MHz C9.)

In my Unity benchmarks, I had disabled the GameWorks settings in the options.

My CPU was running at 100% in Crossfire, while it was around 55-85% with a single GPU.
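
Side note: a single overall "CPU usage" number can hide one pegged submission thread while the other cores idle, which is exactly where DX11 driver overhead tends to show up. A rough sketch of how per-core load could be logged during a run, assuming the third-party psutil package (the 60-second window is arbitrary):

```python
# Sketch: log per-core CPU utilisation while a benchmark runs, to see whether
# the load is spread out or a single (driver/submission) thread is pegged.
# Assumes the third-party psutil package: pip install psutil
import time
import psutil

samples = []
duration_s = 60          # length of the benchmark window (adjust as needed)
start = time.time()
while time.time() - start < duration_s:
    # cpu_percent with an interval blocks for 1 s and returns one value per core
    samples.append(psutil.cpu_percent(interval=1.0, percpu=True))

per_core_avg = [sum(core) / len(samples) for core in zip(*samples)]
print("average per-core load:", [f"{v:.0f}%" for v in per_core_avg])
print("busiest core average :", f"{max(per_core_avg):.0f}%")
```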
 
The numbers don't show that at all. A single GTX 980 outperforms a single 290X in one game, and from this you conclude that Nvidia has lower CPU overhead?


Again, this doesn't mean it's due to CPU overhead. As lanek says, if the framerate is the same, that means Crossfire is just not working in this title, and in situations like this it's even possible for the dual GPUs to get a lower framerate than the single GPU. I would suggest you read some detailed reviews, especially across a wider selection of games, before drawing any conclusions.

I've been using crossfire since the 4870X2. I know what to expect.

This is a CPU-limited case. There's a reason I posted the video links: I have an OSD on the videos that shows valuable info.

As I said to lanek, the CPU usage in the Crossfire video was very close to 100% most of the time, while it was much lower with the single card.

Still, my main question is why Advanced Warfare runs at 45 fps with the FX-6100 + 290X, while it hits 72 fps on the 980 with the same CPU. Sure, the 290X can reach 72 fps; it's the CPU that's holding it back. My question is why, and also why the 980 scores higher with the same CPU. It's that fine detail I don't understand. Shouldn't they both be rendering at the same speed with the lower-end CPU?
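
To put the question in per-frame terms: if both setups really are CPU-limited in that scene, the quoted framerates translate directly into CPU time per frame, and the gap is the extra CPU-side work on the slower path. A rough calculation using the figures quoted above:

```python
# Back-of-the-envelope: convert the quoted framerates into per-frame CPU budgets.
fps_290x_fx6100 = 45   # figure quoted above, FX-6100 + 290X
fps_980_fx6100  = 72   # figure quoted above, same FX-6100 with a 980

ms_per_frame_290x = 1000.0 / fps_290x_fx6100   # ~22.2 ms
ms_per_frame_980  = 1000.0 / fps_980_fx6100    # ~13.9 ms

# If both configurations are CPU-limited, the difference is extra CPU-side
# work per frame (driver, API submission, etc.) on the slower path.
print(f"290X path: {ms_per_frame_290x:.1f} ms/frame")
print(f"980 path : {ms_per_frame_980:.1f} ms/frame")
print(f"implied extra CPU time per frame: {ms_per_frame_290x - ms_per_frame_980:.1f} ms")
```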
 
https://www.linkedin.com/pub/linglan-zhang/5/75b/4b0
• Developed the world’s first 300W 2.5D discrete GPU SOC using stacked die High Bandwidth Memory and silicon interposer

https://www.linkedin.com/in/ilanashternshain
• Backend engineer and team leader at Intel and AMD, responsible for taping out state of the art products like Intel Pentium Processor with MMX technology and AMD R9 290X and 380X GPUs.

Nice find. I guess HBM is confirmed for AMD's next high-end GPU (Fiji?) then. For what it's worth, the second LinkedIn profile link also mentions this:

AMD R9 380X GPUs (largest in “King of the hill” line of products)
 
300W - sticking with 28nm?

Edit - second link says:

Experienced with broad range of process technologies including latest 14nm node.

14nm synthesis optimization research; working closely with FE netlist owners to optimize synthesis options for place and route of newly introduced 14nm process.

Bit of a jump to go straight into a 300W 14nm part, but it might be something.
 
It's 28nm, 99.99% surely; a 0.01% possibility of 20nm; 14nm, no way in hell.

14nm for a GPU in the early part of the year, yeah, no way. But they have probably already taped out a test chip or an early rev of a 14nm GPU, even if it won't be commercialized until 2016 (maybe Q4 2015 if we're really lucky).
 
(Putting unreleased products on your public resume: the best way to make recruiting companies question your judgment just when you might need it the most.)

300W: either AMD didn't figure out a way to cut back on power despite using HBM, or this chip is insanely large and high performance.
 
(Putting unreleased products on your public resume: the best way to make recruiting companies question your judgment just when you might need it the most.)
AMD and ex-AMD employees seem to do this a lot.

300W: either AMD didn't figure out a way to cut back on power despite using HBM, or this chip is insanely large and high performance.

They already have the power bands they are comfortable targeting, so staying below them would be leaving performance on the table even if there were a major increase in efficiency.
Barring a major architectural revamp, I would expect AMD needs all the performance it can get. HBM should provide a significant one-time benefit in shaving off a fraction of the board's consumption that can be transferred to the GPU, but I would rather wait for actual measurements to see if it is sufficient to overcome GCN's tendency to consume more power relative to the competition.
Existing GPUs can have their power ramp above 300 W, so even if it were an existing design pasted onto an HBM controller, it would have little difficulty absorbing all the spare wattage.
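
To put rough numbers on the "shave a fraction off the board and hand it to the core" idea, here's a purely hypothetical budget split for a fixed 300 W board; the memory-subsystem wattages below are assumptions for illustration only, not measurements.

```python
# Purely hypothetical budget split for a fixed 300 W board. The memory
# subsystem numbers are illustrative assumptions, not measured figures.
board_budget_w    = 300
gddr5_subsystem_w = 60   # assumed: GDDR5 DRAM + PHY + wide bus at high clocks
hbm_subsystem_w   = 30   # assumed: HBM stacks + short interposer links

core_budget_gddr5 = board_budget_w - gddr5_subsystem_w   # watts left for the GPU core
core_budget_hbm   = board_budget_w - hbm_subsystem_w

freed_w = core_budget_hbm - core_budget_gddr5
print(f"core budget with GDDR5: {core_budget_gddr5} W")
print(f"core budget with HBM  : {core_budget_hbm} W")
print(f"wattage freed for the core: {freed_w} W "
      f"({100 * freed_w / board_budget_w:.0f}% of the board)")
```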
 
I agree that it makes sense not to leave performance on the table, to a certain extent: it really depends on the incremental cost of doing that (cooling, regulators, etc.); I have no idea. But we already saw this water cooler a couple of months ago, which can't be too cheap. Add to that the extra cost of HBM, and it had better be a whole lot faster than GM200.
 
I agree that it makes sense not to leave performance on the table, to a certain extent: it really depends on the incremental cost of doing that (cooling, regulators, etc.); I have no idea. But we already saw this water cooler a couple of months ago, which can't be too cheap. Add to that the extra cost of HBM, and it had better be a whole lot faster than GM200.

I wouldn't expect GM200's power draw to be all that different from this future AMD GPU.
 
I wouldn't expect GM200's power draw to be all that different from this future AMD GPU.
A GTX 980 has a TDP of 165W. Add 50% and you're at 247W. That doesn't mean it won't exceed that number when overclocking, just like a GTX 980, but it's the number they can design for and market with.

300W is not impossibly hard in any way, but AMD doesn't have the luxury of being able to overprice.

HBM, interposer, costlier cooling: theirs will be a very costly solution compared to a GM200 that uses plain vanilla solutions. It had better be significantly faster.
 
A GTX 980 has a TDP of 165W. Add 50% and you're at 247W. That doesn't mean it won't exceed that number when overclocking, just like a GTX 980, but it's the number they can design for and market with.
The GTX 980's TDP is 180W, not 165W. I have no clue why they use 165W in marketing when even the default reference BIOS says 180W.
 
The GTX 980's TDP is 180W, not 165W. I have no clue why they use 165W in marketing when even the default reference BIOS says 180W.

Based on the reviews, I would say the 165W is an average TDP calculated over different situations or games.
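
For the record, the "+50%" arithmetic with both figures mentioned above (the 165W marketing number and the 180W reference BIOS limit), next to a 300W board:

```python
# Quick check of the "+50%" scaling against a 300 W board power figure.
for tdp_w in (165, 180):            # marketing figure vs. reference BIOS limit
    scaled = tdp_w * 1.5
    print(f"{tdp_w} W + 50% = {scaled:.0f} W  (headroom to 300 W: {300 - scaled:.0f} W)")
```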
 