AMD: Pirate Islands (R* 3** series) Speculation/Rumor Thread

Has anyone seen the air-cooled Fury non-X anywhere?
I'm really curious how long that card will be with an air cooler. I assume the PCB will be as small as the Fury X's or the Nano's, but the cooler should make it longer.

I think the same, unless they go the 2.5-3 slot route with two large fans (bigger heatpipes, dual-pass radiator). But no, I haven't seen anything yet.
 
If so, then TPU's graphs show some disparity with AMD's slide. Do you see that the TPU system shows significantly lower results?

I'm not sure I understand what you mean. The AMD slide shows 54 fps average and 43 fps minimum for the Fury at 4K ultra settings (presumably to show that the minimum and average are close, so gameplay should be smooth, with no deep framerate dips to 12-15 fps). The TPU review shows 43 fps average for the overclocked 980 Ti Gaming edition (they don't list minimum fps). If the numbers are comparable, the overclocked 980 Ti's average fps equals the AMD card's minimum fps.

Of course, the problem is we don't know which settings or which part of the game/benchmark were used, so it's hard to compare the two scores and get a real picture of the Fury's performance.

Lol, I'm pretty sure the results in Project CARS will be way different.
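To make the average-versus-minimum point concrete, here is a toy sketch (my own illustration, not AMD's or TPU's methodology) showing why a minimum fps close to the average suggests smooth gameplay. The frame-time samples are hypothetical.

```python
def fps_stats(frame_times_ms):
    """Return (average_fps, minimum_fps) from a list of frame times in ms."""
    avg_frame_ms = sum(frame_times_ms) / len(frame_times_ms)
    worst_frame_ms = max(frame_times_ms)  # slowest frame -> minimum fps
    return 1000.0 / avg_frame_ms, 1000.0 / worst_frame_ms

# Smooth run: frame times cluster tightly around ~18.5 ms.
smooth = [18, 19, 18, 19, 18, 19, 18, 19]
# Stuttery run: mostly fast frames, but with an 80 ms spike.
stutter = [12, 12, 12, 80, 12, 12, 12, 12]

avg_s, min_s = fps_stats(smooth)
avg_j, min_j = fps_stats(stutter)
print(f"smooth : avg {avg_s:.1f} fps, min {min_s:.1f} fps")
print(f"stutter: avg {avg_j:.1f} fps, min {min_j:.1f} fps")
```

The smooth run averages about 54 fps with a minimum around 53; the stuttery run still averages near 49 fps but its minimum collapses to 12.5 fps, exactly the kind of deep dip the slide is implying the Fury avoids.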
 
The settings should be the same - both maxed out at 4K.

It sounds quite good that they managed to improve so much - 28 fps in TPU's review with the R9 290X versus 54 fps on AMD's slide with the R9 Fury X.

Well, I have my doubts, because if we compare TPU's scores for the stock 980 Ti and Titan X with that number, we find roughly 50% better performance for the Fury X, and nearly doubled performance compared to the 290X, as you noted... that is really a lot.
I can maybe imagine that in this specific game, at this specific resolution, the Fury will be faster... but 50% faster (1.5x) is too much.

That's why I'm really tempted to temper this a little. If AMD had put their competitor's performance on the slide and it showed the same thing, I would say OK, but we are comparing a review made by TPU with a slide that I assume isn't faked... maybe they didn't even benchmark the same scene, or didn't use the in-game benchmark, so the scores will differ.

I would love it to be true, but I'm a bit too much of a realist to believe this will be the case.
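The ratios above can be sanity-checked with simple arithmetic. The 54 fps figure is from AMD's slide and 28 fps is TPU's 290X result cited earlier in the thread; the 36 fps for a stock 980 Ti is an illustrative assumption consistent with the "50% better" claim, not a number from the source.

```python
def speedup_pct(new_fps, old_fps):
    """Percentage gain of new_fps over old_fps."""
    return (new_fps / old_fps - 1.0) * 100.0

fury_x, r9_290x, gtx_980ti = 54.0, 28.0, 36.0  # 980 Ti figure is assumed
print(f"Fury X vs 980 Ti: +{speedup_pct(fury_x, gtx_980ti):.0f}%")  # +50%
print(f"Fury X vs 290X  : +{speedup_pct(fury_x, r9_290x):.0f}%")    # +93%, nearly 2x
```

Note that 54 fps over 28 fps is a ~93% increase (nearly 2x), which is presumably what "nearly 200%" in the thread is loosely describing.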
 

AMD have had big driver overhead in DX11, so the improvement seems plausible to me.
But DX11 isn't interesting anymore; DX12 is.
Windows 10 is about to become the big PC gaming platform, using DX12.
 

That would imply they have fixed it, but I don't think the overhead was caused by hardware. In most cases, the overhead showed up in Nvidia-collaboration games, where the AMD driver had to fix too many things instead of just doing its job of bridging the game engine and the GPU. Nvidia have of course worked on the question, and I don't deny that in general the Nvidia driver has lower overhead in DX11, with good results to show for it, but I really don't think AMD's driver overhead was abnormal in the general case. There's no relation to the hardware, only to the code of the game. GCN is not known for having worse overhead than Nvidia's architecture; nothing in the hardware would suggest that.

Take the case of Project CARS: what could AMD do on the driver side when the game runs at 71 fps on a GTX 680 and 31 fps on a 290X? Let's imagine they manage to ship a driver that solves every problem and reaches 70 fps, since the developer doesn't seem in any hurry to provide a patch. What would that driver look like? They would basically need a driver that corrects everything causing problems - effectively debugging the game through the driver - just to double the performance. Personally, I think that's mission impossible.

(Project CARS isn't considered an "Nvidia" game, but to be honest I'm troubled by this, so I'm sorry for bringing it up here. Thousands and thousands of us AMD GPU users tested this game over four years. As Kickstarter backers we got every build, tested them, and reported our performance and our problems. We never saw anything approaching what happened at the game's release; at some points AMD GPUs even performed better in this game than their Nvidia counterparts. We had drivers for those tests, and feedback from the developers on performance; whenever we found bad performance, a new build fixing it would arrive within five days. But the final result is absolutely unbelievable.)
 

So you're saying Nvidia will now pay the developer to add textures that need 6 GB, to hurt the Fury's performance versus their own 980 Ti? Ouch.
 

I'm not saying that... I won't get into that question. I don't think that's the case with Project CARS anyway; it's a special case, since it was a Kickstarter project. We had access to every build and collaborated with them (us gamers, the backers) on what suddenly became a full game-production project. And suddenly the relationship between the backers, the gamers, and the developers completely changed.

But if we want an example from game production: you have two big studios, Electronic Arts and Ubisoft. EA works with AMD, Ubisoft works with Nvidia, and Ubi and EA have been competitors practically since they were founded. My point is that things are more complicated than just "pay or don't pay"; relationships can be strange and don't necessarily involve money. But that's another topic... let's not go down that road here.
 


No Lanek, the same behavior as Project CARS in DX11 is seen in other games, even AMD-sponsored games; this is a driver-related issue.

There is higher CPU overhead on AMD cards with DX11 drivers.
 

I didn't say it wasn't the case; I said that globally it wasn't much of a performance problem.

Do you really think a game that was tested for four years on AMD GPUs ends up with a 200% performance difference versus a 780 (not even a Ti - 290X vs 780) because of driver overhead?

At 1080p, 1440p, and 4K, with a 12-thread 4930K... I'd like to know where this driver overhead was when my 7970 was on par with a 770 in this game during those four years.

I don't know how many draw calls Project CARS uses, but seriously... Anyway, we're completely off topic; this discussion belongs in another thread.

As I said, Project CARS could be a study on its own. The game was extensively tested by the community - backers and gamers - for more than four years, on their own setups, which of course included high-end CPUs, low-end CPUs, and high-end GPUs from both AMD and Nvidia.
 


It's not just the number of draw calls, I think; it's something else. If you look at Rise, the last game that was mentioned, why is it that Nvidia cards get the same frames on both high-end and low-end CPUs, but on AMD cards the low-end CPU tanks?
 

To be honest, I haven't watched that video... If it were a question of draw calls, in Project CARS or other games, we'd be talking about overhead on low-end CPUs. But in all the games mentioned, Project CARS included, performance tanks even with high-end CPUs - reviewers don't use low-end CPUs when they benchmark GPUs. Do you really think it's CPU overhead?

We could use a dual-core i3, or an Atom, to talk about overhead... but how do you explain the performance in Project CARS when the CPU is a 4.5 GHz overclocked 6-8 core Intel chip and performance still dives?

I tested this game for four years on 7970s... I never saw anything like what you see in the final release. The only problem we had was creating a CrossFire profile, since AMD didn't provide one.

We finally created one based on AFR, with some tweaks to avoid texture flashing.
 

It's possible that video shows similar issues with Project CARS as with Rise. The Nvidia cards aren't as bottlenecked by the CPU.
 
Whenever AMD is about to release a top-end GPU, benchmarks show up that look very good. And then it's followed by disappointment. Maybe things are different this time; we'll know soon.

But if it's true, then I'm at a loss about the pricing strategy: AMD has just stated that they no longer want to be the guy you go to when you want a lower price.
They'd have better performance, they have a GPU that looks amazing, and they have the unique story of HBM.

The crowd that buys the Titan X would eat up a Fury X at $850. With a good air cooler, they could still price the Fury at $650 and it would sell amazingly.

I don't get it.
 

So at 1440p and 4K you shouldn't be bottlenecked by the CPU? In reality, in Project CARS, the CPU doesn't matter even at 1080p... I tested this game for four years, dudes, on AMD GPUs, and I never saw performance dive. I have a damn good 4930K, heavily overclocked under water, which would make any CPU overhead disappear instantly, and performance still dives badly in the final game... the same game I tested for four years without any problem on the same GPUs. Hell yeah, my two 7970s are getting really old - time to replace them.
 
Nvidia has something like an 80% advantage in the 3DMark DX11 draw-call test. There's lots of debate about that at Guru3D.
The latest Windows 10 drivers (still nowhere near release-ready) improve this even on Windows 8.1, and it shows in games right away.

I think it will become kind of moot with DX12. AMD will probably benefit even more than Nvidia.
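The argument both sides are making can be captured in a toy frame-time model (my own simplification, not measured data): each frame costs some GPU render time plus per-draw-call CPU/driver submission time, and whichever side is slower sets the frame rate. It shows why a draw-call overhead advantage only matters once the CPU side becomes the bottleneck, and why it can be invisible at 4K. All numbers below (draw-call count, per-draw costs, GPU times) are hypothetical.

```python
def fps(gpu_ms, n_draws, driver_us_per_draw):
    """Frame rate when GPU work and CPU draw-call submission overlap."""
    cpu_ms = n_draws * driver_us_per_draw / 1000.0  # CPU submission time
    return 1000.0 / max(gpu_ms, cpu_ms)             # bottleneck sets the pace

N_DRAWS = 5000                 # hypothetical draw calls per frame
FAST, SLOW = 2.0, 3.6          # us per draw: ~80% overhead difference

# GPU-bound at 4K (25 ms of GPU work): driver overhead is hidden.
print(fps(25.0, N_DRAWS, FAST), fps(25.0, N_DRAWS, SLOW))  # 40 vs 40 fps
# CPU-bound at lower resolution (8 ms of GPU work): overhead shows directly.
print(fps(8.0, N_DRAWS, FAST), fps(8.0, N_DRAWS, SLOW))    # 100 vs ~56 fps
```

Under these assumptions an 80% per-draw overhead gap costs nothing at 25 ms of GPU work per frame, but nearly halves the frame rate once the GPU side drops to 8 ms, which is consistent with Nvidia and AMD cards scoring the same on fast CPUs while the slower driver path tanks when CPU-bound.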
 

Of course, but you can run Project CARS through GPU PerfStudio; the draw calls are far from enough to explain anything. I mean, a game like BF4 issues more draw calls than Project CARS.

But well, I'll stop there; this isn't the topic for it. I've given plenty of hints about the performance problems with this game. (Sorry, but I'm a bit bitter about that game. Having followed its development nearly from the start, when I saw the first benchmarks of the official release I was like... what? I expected to see the 290X maybe 5-7% below the 780 Ti, not 200% lower... it's unbelievable compared with the testing we did, Apex included, and there's still no patch to fix the performance on AMD GPUs.)
 