AMD: Speculation, Rumors, and Discussion (Archive)

Status
Not open for further replies.
Because if that's the case, it's hard not to get the Nvidia card, since they generally have way better "past-proofing" with DX11 and they won't be behind on future-proofing with DX12.

I don't know why there should be any doubt as to what card you should purchase within the next few months.
If your budget is $100-$300, you should wait and see what AMD has to offer with Polaris. If your budget is above $350 ($370 more precisely), then the Pascal cards are a no-brainer.

Unless you're thinking of a multi-GPU setup, there isn't a hard decision to be made. The new FinFET offerings from the two IHVs won't overlap during the next few months.

But if you're talking about pure future-proofing then I suggest you read this thread.
For the last 4 years, AMD GPUs have aged substantially better than Kepler in DX11 and Maxwell in DX12. Whether Pascal will turn this around depends on how successful GameWorks is and how much you want to believe nvidia's PR.

If you're wondering if Async is or will be very important or not in the upcoming DX12/Vulkan games, this should answer your question.
Devs from Oxide (AoTS), id Software (Doom), Q-Games (The Tomorrow Children) and Ubisoft Montreal (Far Cry Primal) all agree that async compute gave them a big boost in game performance.
That should be worth a lot more than the arm-chair expert rambling you see in forums (though B3D in particular does have some actual experts, e.g. sebbbi).
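To make the claim concrete, here is a minimal toy model (my own illustration, not from any of the developers cited above) of why overlapping a frame's compute passes with its graphics passes raises frame rates: if the hardware can run independent compute work concurrently, part of the compute time is hidden.

```python
# Toy model of the async compute win. A frame has graphics work plus
# independent compute work (lighting, particles, post-processing).
# On a single serialized queue the passes run back to back; with async
# compute some fraction of the compute time hides under graphics
# execution. All numbers are illustrative, not measurements.

def frame_time_serial(graphics_ms, compute_ms):
    """Graphics and compute passes run back to back on one queue."""
    return graphics_ms + compute_ms

def frame_time_async(graphics_ms, compute_ms, overlap=1.0):
    """`overlap` is the fraction of compute time that can hide under
    graphics work (1.0 = perfect overlap, 0.0 = fully serialized).
    Hidden time can never exceed the graphics time itself."""
    hidden = min(compute_ms * overlap, graphics_ms)
    return graphics_ms + compute_ms - hidden

serial = frame_time_serial(12.0, 4.0)       # 16.0 ms (~62 fps)
overlapped = frame_time_async(12.0, 4.0)    # 12.0 ms (~83 fps)
partial = frame_time_async(12.0, 4.0, 0.5)  # 14.0 ms
speedup = serial / overlapped               # ~1.33x "for free"
```

A 4 ms compute pass fully hidden under a 12 ms graphics pass is exactly the kind of double-digit gain those devs reported, which is why it matters that some architectures get this overlap and others don't.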
 
When it comes to hardware, AMD has almost always been of superior quality, sometimes by a wide margin. That doesn't necessarily translate into better performance, but it is much easier to work with because it is orthogonal and sensible.
I can't say the same about NV hardware; I wrote a lot of workarounds because NV hardware does the minimum needed to tick the right marketing boxes. (Pixel shaders without constant registers meant the code had to be patched, leading to disastrous performance; you didn't notice it in games because we had to work around it, which was wasted time for us. GS being beyond slow on NV hardware; and so on...)
 
I don't know why there should be any doubt as to what card you should purchase within the next few months.
If your budget is $100-$300, you should wait and see what AMD has to offer with Polaris. If your budget is above $350 ($370 more precisely), then the Pascal cards are a no-brainer.

Unless you're thinking of a multi-GPU setup, there isn't a hard decision to be made. The new FinFET offerings from the two IHVs won't overlap during the next few months.

Oh, they will overlap; I'm pretty sure it will happen within the next 2 months, too.

But if you're talking about pure future-proofing then I suggest you read this thread.
For the last 4 years, AMD GPUs have aged substantially better than Kepler in DX11 and Maxwell in DX12. Whether Pascal will turn this around depends on how successful GameWorks is and how much you want to believe nvidia's PR.

If you're wondering if Async is or will be very important or not in the upcoming DX12/Vulkan games, this should answer your question.
Devs from Oxide (AoTS), id Software (Doom), Q-Games (The Tomorrow Children) and Ubisoft Montreal (Far Cry Primal) all agree that async compute gave them a big boost in game performance.
That should be worth a lot more than the arm-chair expert rambling you see in forums (though B3D in particular does have some actual experts, e.g. sebbbi).

GameWorks won't matter much for the next few months, because AMD has no response in the performance market, and nV will have a response in the mainstream segment within a couple of months of AMD's mainstream release.

Also, you aren't taking into consideration that the amount of compute in games has increased. Pascal addressed this quite a bit by increasing clock speed while keeping ALU counts at reasonable levels relative to previous generations. Let's see what AMD can do in this regard; they already reached very high peak theoretical compute output with Fiji, so they might not be able to increase their compute throughput by the same percentage nV has. P10 should give us a good idea of what they will be doing with clock speeds and ALU counts, and so will Vega.
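The clocks-versus-ALU-width point can be made with the standard peak FP32 formula (ALUs × 2 ops per FMA × clock). The figures below are the public specs for Fury X (Fiji) and GTX 1080 (GP104); this is peak theoretical throughput only, not game performance.

```python
# Peak theoretical FP32 throughput: ALUs * 2 (an FMA counts as two
# floating-point ops) * clock in GHz, scaled from GFLOPS to TFLOPS.
# Public specs: Fury X (Fiji) has 4096 ALUs @ 1.050 GHz;
# GTX 1080 (GP104) has 2560 ALUs @ 1.733 GHz boost.

def peak_tflops(alus, clock_ghz):
    return alus * 2 * clock_ghz / 1000.0

fiji  = peak_tflops(4096, 1.050)   # ~8.6 TFLOPS: many ALUs, modest clock
gp104 = peak_tflops(2560, 1.733)   # ~8.9 TFLOPS: fewer ALUs, high clock
```

Two very different ALU/clock mixes land at almost the same peak number, which is exactly why the interesting question for P10 and Vega is which lever AMD pulls, not the headline TFLOPS figure.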

nV has always been better at marketing cards that fit the games that are out at a specific time. The value of AMD's forward-looking architectures is only ever seen in hindsight. While that's good, it actually hurts them in the end because the benefit is never visible in the moment. Don't forget about the present while thinking of the future. If you are going to push the "in the future" angle, you'd better deliver in the present too.
 
You can easily turn this argument around. There is also some inherent (hardware) "overhead" in creating an architecture that provides high performance with fewer threads. And as SMT/hyperthreading generally provides a performance uplift for throughput tasks, it is basically always preferable from a performance perspective, even on a design A that delivers higher performance with a low number of threads than another design B. The two are basically independent. Whether the integration is worth the effort on design A or design B is another question; it is basically the same kind of question as whether the effort of changing design B so that it gets design A's low-thread-count performance characteristics is worth it ;).
Of course, and that's basically the whole answer to this AC debate: it is not something everyone needs equally badly. For some (architectures), it is totally worth it; for others, it just isn't. And maybe there is even some kind of middle ground.

Who knows, maybe in the future there will be Nvidia architectures which benefit greatly from this, much more than Maxwell and Pascal. We have already seen Nvidia altering their approach: from the very autonomous and highly efficient (from an area and compiler point of view) units in Fermi, to their complete opposite in Kepler, to something more of a middle ground in Maxwell and Pascal-04, whereas Pascal-00 seemingly resembles the balance in GCN more.
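The threads-versus-per-thread-performance tradeoff being debated above can be sketched with a classic latency-hiding model (a simplification I'm adding for illustration): if a memory access takes L cycles and each thread-group does A cycles of ALU work between accesses, a unit needs roughly 1 + L/A groups resident to stay busy.

```python
# Toy latency-hiding model. While one thread-group waits ~L cycles on
# memory, the scheduler issues ALU work from other resident groups.
# With A cycles of arithmetic between memory requests, full utilization
# needs about 1 + L/A groups in flight. Numbers are purely illustrative.

def groups_needed(mem_latency_cycles, alu_cycles_between_loads):
    """Resident thread-groups required to fully hide memory latency."""
    return 1 + mem_latency_cycles / alu_cycles_between_loads

def alu_utilization(resident_groups, mem_latency_cycles, alu_cycles):
    """Fraction of cycles the ALUs are busy, capped at 1.0."""
    return min(1.0, resident_groups / groups_needed(mem_latency_cycles, alu_cycles))

# Example: 400-cycle memory latency, 100 ALU cycles per load.
# Five groups keep the unit saturated; with only two resident,
# the ALUs sit idle 60% of the time.
```

This is why a throughput design that tolerates (or needs) many threads and a design tuned for few threads can both be rational; the "is async compute worth integrating" question is just this tradeoff seen from the hardware side.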
 
So, has a review date for these cards been announced?
Rumor says AMD is so afraid of looking ridiculous against Pascal that no review will hit the web before any card is available on the shelves...

...or not
 
But if you're talking about pure future-proofing then I suggest you read this thread.
For the last 4 years, AMD GPUs have aged substantially better than Kepler in DX11 and Maxwell in DX12.
The GCN architecture has nothing to do with future-proofing. If Keplers were in the consoles, they would be just as "future-proof" as GCN is now, for very obvious reasons. Unfortunately, the fact is that Kepler demos shown in 2012-2013 still look better than anything on consoles; they show some real future-proofing.
 
The GCN architecture has nothing to do with future-proofing. If Keplers were in the consoles, they would be just as "future-proof" as GCN is now, for very obvious reasons. Unfortunately, the fact is that Kepler demos shown in 2012-2013 still look better than anything on consoles; they show some real future-proofing.
GCN was future-proofing to begin with: they packed it with units that were completely useless under the APIs of the day, because they had reason to believe those units would become useful during the cards' lifetime.
 
they packed it with units that were completely useless under the APIs of the day
Most of those features were requested by Sony and their developers, so there was no mere belief involved. If AMD had simply relied on belief, they would certainly be screwed now. Anyway, Kepler has its own strengths as well.
 
Most of those features were requested by Sony and their developers, so there was no mere belief involved. If AMD had simply relied on belief, they would certainly be screwed now. Anyway, Kepler has its own strengths as well.
I'm pretty sure Sony wasn't involved in the original GCN design, but only in the GCN 1.1 updates, just like MS.
 
It's easy to see there is no link: how long have the Xbox One and PS4 been out? Since their launch, what has nV's and AMD's market share been like?

It's been a downhill slide for AMD.

We heard the same story when nV got the original Xbox contract, and yeah, the FX series came out soon after, lol. Guess what: the PC market spoke, and it doesn't give a crap about what hardware the consoles have in them.
 
No idea where the market share numbers I saw a week or two ago went... seems like they got buried.

Anyways, AMD has been making gains in market share, but more to the point, they are making gains against comparable Nvidia products.

So saying GCN hasn't helped them at all in the PC market is patently false.
 
No idea where the market share numbers I saw a week or two ago went... seems like they got buried.

Anyways, AMD has been making gains in market share, but more to the point, they are making gains against comparable Nvidia products.

So saying GCN hasn't helped them at all in the PC market is patently false.


Since the launch of GCN's second-generation cards, AMD has lost over half of its market share, while at the launch of the PS4 and Xbox One, GCN's market share remained flat at ~40%. Only one quarter after the Xbox One and PS4 launched, and pretty much one quarter before nV launched the 900 series, AMD started losing market share, and it has been a steady decline ever since. In the last two quarters AMD regained 4% of market share, but this has more to do with finally having some products that are compelling to buy, versus products that weren't before, and it's very limited.

What we have seen in the past two quarters is the market rebalancing itself by getting AMD products back into the supply lines, nothing more. There has been no magical increase from consoles, no price wars, nothing to push OEMs, system builders, retailers, or end consumers to go out and purchase AMD products beyond the products simply being available.

I'm leaving notebook market share out of the equation because we know how fast notebook market-share swings happen; we have seen swings of 20% and more in a single quarter, because that segment is all OEMs. If a company has a product that beats the competition in perf/watt, it will automatically win that market share, provided it is willing to meet the price OEMs are looking for.
 
Most of those features were requested by Sony and their developers, so there is no such thing as believe. If AMD simply relied upon believe, they would certainly be screwed now. Anyway, Kepler has its own streights as well
Async compute was implemented prior to GCN. It took the consoles and DX12 to make it broadly useful.

Debating whether consoles improved AMD's market share is pointless. There's no way to prove it either way.
 


The C7 is only ~10% slower than this Fury series card at 1440p extreme with similar hardware. (From another AMD tester; only the game version is different, but the results were the same on a previous game version.)

http://www.ashesofthesingularity.co...-details/10ff69fc-86dc-4fed-b03f-82a7ae10f2f0

It's only slightly slower than this 390 series card, but that card is rendering at 1080p extreme.

http://www.ashesofthesingularity.co...-details/c9c7cc8a-6263-452c-b252-8f2db12a894f

And a Fury series card is 27% faster than the above 390 series result at 1080p extreme, and 18% faster than its 1440p result.

http://www.ashesofthesingularity.co...-details/857b2b5a-bde9-46e8-9b63-6c32fa0722b1

I think it's pretty likely the above scores are at stock settings and somewhat likely that the cards are full chip versions. So quite likely that it's in the 390X-Fury range of performance.
 