AMD: Speculation, Rumors, and Discussion (Archive)

If the numbers here are to be believed https://www.computerbase.de/2016-07...md-nvidia/#diagramm-doom-mit-vulkan-2560-1440 then the 980 Ti, which has about 1.1x the TFLOPS of a 390, is about 1.13x faster at 1080p and 1.1x faster at 1440p. However, the Fury X is only 1.43x a 390 and 1.3x a 980 Ti, rather than the 1.68x and 1.52x, respectively, that its FLOPS would suggest.
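As a rough sanity check of those ratios (the TFLOPS figures below are approximate reference specs, assumed here rather than taken from the article):

```cpp
// Approximate reference-spec FP32 throughput: ALUs * 2 ops * clock (GHz).
#include <cstdio>

int main() {
    const double tflops_390   = 2560 * 2 * 1.000 / 1000.0;  // ~5.12 TFLOPS
    const double tflops_980ti = 2816 * 2 * 1.000 / 1000.0;  // ~5.63 TFLOPS (base clock)
    const double tflops_furyx = 4096 * 2 * 1.050 / 1000.0;  // ~8.60 TFLOPS

    std::printf("980 Ti / 390   : %.2fx\n", tflops_980ti / tflops_390);   // ~1.10x
    std::printf("Fury X / 390   : %.2fx\n", tflops_furyx / tflops_390);   // ~1.68x
    std::printf("Fury X / 980 Ti: %.2fx\n", tflops_furyx / tflops_980ti); // ~1.53x
    return 0;
}
```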


So looking at the results from Computerbase and Gamegpu, and the fact that the difference between both sites seems to be really small (up to 6% from SMAA to TSSAA on Nvidia cards), it seems the Fury X is actually performing the same as a GTX 1080 in both 1080p and 1440p, if Async Compute is being used.




At 4K, the GTX 1080 goes up to 56 FPS whereas the Fury X goes down to 45 FPS.

FP32 throughput is basically the same on both the GTX 1080 and the Fury X, so it seems the cards are compute-limited up to 1440p.
At 4K, it seems the Fury X is either memory-limited or fillrate-limited, though I'm more inclined towards the latter because the game at Ultra Quality won't use more than 4GB of VRAM.
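For what it's worth, a back-of-the-envelope comparison (using approximate reference specs, which are my assumption rather than figures from the article) is consistent with the fillrate lean: both cards have 64 ROPs, but the GTX 1080 clocks far higher, while the Fury X actually has more raw memory bandwidth.

```cpp
// Crude peak-rate comparison from reference specs (not measured data).
#include <cstdio>

int main() {
    // Pixel fillrate ~= ROP count * core clock (Gpixels/s).
    const double furyx_gpix   = 64 * 1.050;  // ~67 Gpix/s @ 1050 MHz
    const double gtx1080_gpix = 64 * 1.733;  // ~111 Gpix/s @ ~1733 MHz boost

    // Memory bandwidth (GB/s).
    const double furyx_bw   = 512.0;  // 4096-bit HBM
    const double gtx1080_bw = 320.0;  // 256-bit GDDR5X @ 10 Gbps

    std::printf("Fillrate  (1080 / Fury X): %.2fx\n", gtx1080_gpix / furyx_gpix); // ~1.65x
    std::printf("Bandwidth (1080 / Fury X): %.2fx\n", gtx1080_bw / furyx_bw);     // ~0.63x
    return 0;
}
```

In this crude model the Fury X gives up a large amount of pixel throughput at 4K while actually holding a bandwidth advantage, which would point at fillrate rather than memory as the limiter.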

 
or at least inflicting some of the software immaturity and weak optimization penalties on competing silicon that had gained an insurmountable lead on the old APIs.
Inflicting damage alright. If Doom is to be the prime example of Vulkan, then I don't see it gaining traction with any other vendor. Why would anyone pick it up just to see performance tank on their current and/or previous architectures? I mean, what is the benefit of it really outside AMD? Even within AMD, it would suffer Mantle's fate if and when AMD decides to abandon GCN.
 
Inflicting damage alright. If Doom is to be the prime example of Vulkan, then I don't see it gaining traction with any other vendor. Why would anyone pick it up just to see performance tank on their current and/or previous architectures? I mean, what is the benefit of it really outside AMD? Even within AMD, it would suffer Mantle's fate if and when AMD decides to abandon GCN.

I'd characterize Doom as being one of the first examples, rather than the defining one.
The risk in refusing to engage with the lower-level API is that it catches on enough, and the competition becomes able to match or exceed the mature, high-maintenance method. The barriers to entry, or to treading water, in high-end graphics on that account are massive.

The risks of performance regression on older hardware are not unprecedented. They can happen sporadically with every driver release already.
It will be interesting to see what happens after the honeymoon period, once a large amount of legacy cruft has been jettisoned but not enough time has passed to see what form of it might accumulate again. What it takes to keep performance and stability long-term, and what constraints architectures impose if they expose low-level features via intrinsics, is unclear, but there could be enough benefit and momentum to make those the intractable problems for which a new generation of interfaces has to find a solution.
 
Inflicting damage alright. If Doom is to be the prime example of Vulkan, then I don't see it gaining traction with any other vendor. Why would anyone pick it up just to see performance tank on their current and/or previous architectures? I mean, what is the benefit of it really outside AMD? Even within AMD, it would suffer Mantle's fate if and when AMD decides to abandon GCN.

You have ~60 million present consoles and their 2017 follow-ups all confirmed to be using GCN, but you think developers should abstain from using all the performance-enhancing features available to them because AMD may, someday in the future, switch architectures in the PC space.

I don't even..
 
Yes, it's a pretty straightforward logical equation. When high-level optimizations beat low-level optimizations on certain hardware, it means something is wrong with the low-level optimizations, or they're just a facade. Do you have another explanation for this? Please enlighten us.

And yes, causing havoc means degrading performance without increasing image quality in the slightest. DX10 was famous for this; it ended up despised, ignored, and was quickly replaced by DX10.1 and then DX11.

What havoc are you talking about? There's not a single DX12/Vulkan benchmark out there where Nvidia loses any significant amount of performance that's outside the margin of error. In every benchmark out there made by a serious site, Nvidia GAINS performance in DOOM/Vulkan compared to OGL. It may not be more than 8% on average, but it does, period. At least on Pascal.
 
Are the X-AMD flop cards performing closer to the X-nV flop cards in the Vulkan/Doom comparison?

e.g. is a 6TF AMD GPU performing what you'd see a 6TF nV GPU do?

They are pretty much exactly the same perf/FLOP-wise, Fiji being an exception.

At least based on computerbase charts.
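A minimal sketch of that normalization, in case "perf/flop" is unclear: divide measured FPS by theoretical FP32 TFLOPS. The FPS values below are placeholders only, not numbers read off the computerbase charts; substitute real measurements before drawing conclusions.

```cpp
// Normalize measured FPS by theoretical FP32 throughput.
#include <cstdio>
#include <string>
#include <vector>

struct Card {
    std::string name;
    double tflops;  // theoretical FP32 TFLOPS (approximate reference specs)
    double fps;     // measured average FPS -- placeholder values here
};

int main() {
    const std::vector<Card> cards = {
        {"R9 390",     5.1, 60.0},  // hypothetical FPS
        {"GTX 980 Ti", 5.6, 68.0},  // hypothetical FPS
        {"Fury X",     8.6, 86.0},  // hypothetical FPS
    };
    for (const auto& c : cards)
        std::printf("%-11s %.2f FPS per TFLOP\n", c.name.c_str(), c.fps / c.tflops);
    return 0;
}
```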
 
It would be nice if people would stop conflating async compute with instruction scheduling. How many times does it have to be explained/repeated that they are not related?!? If you think they are, you still don't get what async compute is.

Also stop this FUD about Pascal not having a HW task scheduler. It's really getting boring at this point. The new 3DMark DX12 benchmark shows Pascal getting 5-6% better performance with async compute enabled. Not that I would be surprised to read again and again that Pascal doesn't support it.

Now I am waiting for those saying "but GCN gains more with async compute" like it's a feature, while it *might* only be showing how GCN struggles more to get gfx work scheduled efficiently (we don't know since we don't have enough data).
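For anyone still mixing the two up, here's a crude CPU-side analogy (this is not GPU, driver, or API code, and the timings are arbitrary): async compute is about exposing independent work on separate queues so the hardware may overlap it when units are idle; it says nothing about how instructions inside a single shader are scheduled.

```cpp
// CPU-side analogy only: two independent "queues" of work, serviced either
// one after the other (single queue) or concurrently (separate queues).
#include <chrono>
#include <cstdio>
#include <thread>

void graphics_work() {  // stand-in for a graphics submission
    std::this_thread::sleep_for(std::chrono::milliseconds(80));
}
void compute_work() {   // stand-in for an independent compute submission
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
}

int main() {
    using clock = std::chrono::steady_clock;
    using std::chrono::duration_cast;
    using std::chrono::milliseconds;

    // Single queue: the compute job waits behind the graphics job.
    auto t0 = clock::now();
    graphics_work();
    compute_work();
    const long long serial_ms =
        duration_cast<milliseconds>(clock::now() - t0).count();

    // Separate queues: independent work overlaps; total time approaches the longer job.
    t0 = clock::now();
    std::thread gfx(graphics_work), cmp(compute_work);
    gfx.join();
    cmp.join();
    const long long overlap_ms =
        duration_cast<milliseconds>(clock::now() - t0).count();

    std::printf("serialized: ~%lld ms, overlapped: ~%lld ms\n", serial_ms, overlap_ms);
    return 0;
}
```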
 
It would be nice if people would stop conflating async compute with instruction scheduling. How many times does it have to be explained/repeated that they are not related?!? If you think they are, you still don't get what async compute is.

Also stop this FUD about Pascal not having a HW task scheduler. It's really getting boring at this point. The new 3DMark DX12 benchmark shows Pascal getting 5-6% better performance with async compute enabled. Not that I would be surprised to read again and again that Pascal doesn't support it.

Now I am waiting for those saying "but GCN gains more with async compute" like it's a feature, while it *might* only be showing how GCN struggles more to get gfx work scheduled efficiently (we don't know since we don't have enough data).

Are you also waiting for all those Maxwell apologists here to apologize for claiming it can do async compute and we just have to wait for a driver update, and for viciously attacking the AotS developers, claiming they are incompetent and in bed with AMD?

Now that we have another confirmation that anything below Pascal is a dud when it comes to the new APIs, I hope I never see anyone recommending a 970 or 980 instead of an RX 480. Or anything other than Pascal, for that matter.
 
Are you also waiting for all those Maxwell apologists here to apologize for claiming it can do async compute and we just have to wait for a driver update, and for viciously attacking the AotS developers, claiming they are incompetent and in bed with AMD?

Now that we have another confirmation that anything below Pascal is a dud when it comes to the new APIs, I hope I never see anyone recommending a 970 or 980 instead of an RX 480. Or anything other than Pascal, for that matter.


It can do it, just not effectively on the SMXs after the first partitioning, and this was never told to anyone outside of nV until Pascal's launch; all we could see was that something was messing up after the first partitioning of the SMXs (which no one knew at the time was static).

And you guys are reading marketing material from AMD that was completely wrong in its assertions about competing products, and even about their own products, since they used different terms that just confused the f out of everyone who didn't really know what was going on. That confusion is still here, despite the many people who have put it into layman's terms time and time again.
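To illustrate why a static split hurts, here's a toy model only (not a description of how the actual hardware partitions its SMs; all numbers are arbitrary): with a fixed partition chosen up front, a bad guess leaves units idle, whereas dynamic balancing keeps everything busy.

```cpp
// Toy model: static graphics/compute partition vs dynamic load balancing.
#include <algorithm>
#include <cstdio>

int main() {
    const double units        = 16.0;   // total execution units (arbitrary)
    const double gfx_work     = 100.0;  // arbitrary work amounts
    const double compute_work = 20.0;

    // Static partition chosen up front (e.g. half/half) -- a bad guess means
    // the compute side finishes early and its units idle.
    const double gfx_units = 8.0, cmp_units = 8.0;
    const double static_time = std::max(gfx_work / gfx_units, compute_work / cmp_units);

    // Dynamic balancing: all units stay busy until all work is done.
    const double dynamic_time = (gfx_work + compute_work) / units;

    std::printf("static split : %.1f time units\n", static_time);   // 12.5
    std::printf("dynamic      : %.1f time units\n", dynamic_time);  // 7.5
    return 0;
}
```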
 
Are you also waiting for all those Maxwell apologists here to apologize for claiming it can do async compute but we just have to wait for driver update, also viciously attacking AotS developers, claiming they are incompetent and in bed with AMD?
At least initially, I recall statements to the effect that there was a vendor check that was overriding any attempt to turn on asynchronous compute for Nvidia. At the time, without a Pascal board to test, the check remained. I'm curious whether that has changed with Pascal now being in more general release.
 
It can do it, just not effectively on the SMXs after the first partitioning, and this was never told to anyone outside of nV until Pascal's launch; all we could see was that something was messing up after the first partitioning of the SMXs (which no one knew at the time was static).

And you guys are reading marketing material from AMD that was completely wrong in its assertions about competing products, and even about their own products, since they used different terms that just confused the f out of everyone who didn't really know what was going on. That confusion is still here, despite the many people who have put it into layman's terms time and time again.

What I meant by "can do" is gain anything with it. We now know it can't; there's more than enough evidence. Pascal can. So please let's drop it.
 
So looking at the results from Computerbase and Gamegpu, and the fact that the difference between both sites seems to be really small (up to 6% from SMAA to TSSAA on Nvidia cards), it seems the Fury X is actually performing the same as a GTX 1080 in both 1080p and 1440p, if Async Compute is being used.
Great! So when is the patch that will actually enable async compute on NV hardware due from Id?
 
Are you also waiting for all those Maxwell apologists here to apologize for claiming it can do async compute and we just have to wait for a driver update, and for viciously attacking the AotS developers, claiming they are incompetent and in bed with AMD?

Now that we have another confirmation that anything below Pascal is a dud when it comes to the new APIs, I hope I never see anyone recommending a 970 or 980 instead of an RX 480. Or anything other than Pascal, for that matter.
So I am making a technical argument and you reply with that? Please put me on your ignored list so you can save time and effort.
 
I get the impression that some people around here are probably more worried than the IHVs themselves. I'm quite sure that nVIDIA will find a way to succeed like they always have, while this is a good opportunity for AMD/ATI to get back on its feet and fight. If anything, this is all a good sign for the GPU and computer graphics market. For the last, what, three generations(?) GPU launches have been anything but exciting. Novelty is back again and we should all be happy about it, regardless of which IHV we are fans of. Proof of this is that I think we haven't seen such heated debate on these forums since Fermi times. Please forgive me for this somewhat pointless post, but maybe we can get a less bitter (and personal) perspective in here, focused more on what this may bring to us as consumers and enthusiasts and less on the short-term consequences for our preferred brand?
 
In every benchmark out there made by a serious site, Nvidia GAINS performance in DOOM/Vulkan compared to OGL. It may not be more than 8% on average but it does, period. At least on Pascal.
Currently it seems so, but that will most likely change once DOOM developers figure out the issues surrounding Nvidia hardware using Vulkan.
 
I have avoided weighing in on this, as it is off-topic, but I think the quality of discourse has degraded enough.

It's always something of a fraught exercise to make the topic of a statement a "you" as in making the focus of questioning the person behind a post rather than a claim or technical concept. People can be defensive, but there are narrow cases where it might be relevant.
Even less useful is the attempt at psychoanalysis of posters rather than discussion of claims or ideas.
Pointless would be trying to psychoanalyze amorphous groups of people that said target is asserted to be a part of.

Useful examples of "you" work like this:
This example you cite does not work.
Your claim is not supported by the evidence.

It is the uncommon case that "you" (singular, or plural when asking someone to represent a "them") or "anything about you, really" is all that relevant or does anything to elevate the discourse.
If it weren't a kind of crap posting that probably should be locked, I would be sorely tempted to create a thread titled "That thing you did or did not do", where people could take their personal outrage about one another.

Or to various other forums, whose utility as places I can browse and roll my eyes at without doing it here is notable.
 
It would be nice if people would stop conflating async compute with instruction scheduling. How many times does it have to be explained/repeated that they are not related?!? If you think they are, you still don't get what async compute is.

Also stop this FUD about Pascal not having a HW task scheduler. It's really getting boring at this point. The new 3DMark DX12 benchmark shows Pascal getting 5-6% better performance with async compute enabled. Not that I would be surprised to read again and again that Pascal doesn't support it.

Now I am waiting for those saying "but GCN gains more with async compute" like it's a feature, while it *might* only be showing how GCN struggles more to get gfx work scheduled efficiently (we don't know since we don't have enough data).

The more DX12 and Vulkan apps roll out, the more I'm convinced that AMD's hardware has simply been grossly underutilized for many years due to driver and API overhead. The results we're seeing on the new APIs today are essentially what we should expect to see based on theoretical performance vs competing nVidia parts.

On the other hand we can't really draw any conclusions about nVidia hardware with the information available so far. Async support aside, one key question remains - does nVidia hardware even have anywhere near the untapped potential that we're seeing from AMD?
 
Currently it seems so, but that will most likely change once DOOM developers figure out the issues surrounding Nvidia hardware using Vulkan.

That's assuming there is a problem in the first place. Expect a max 10% bump in Pascal performance when they enable and fine-tune async compute. Expect a 0% bump on anything older.

Nvidia has been working with id on Vulkan for quite some time now, likely just as long as AMD. It was even a showcase for the Pascal launch. Now we are supposed to believe they are all of a sudden so incompetent they need twice as long to figure out their s*? Pardon my French.
 