AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

When the general advice with NVidia is to run games in D3D12 instead of D3D11, we'll know it's not broken. That the current advice is so starkly the opposite indicates a problem.

Same as when the general advice is to buy Vega 64, instead of 1080. Or at the very least, "choose either, they're about the same".

One could argue that D3D12 on NVidia is relatively undesirable, because D3D11 is very good. Though that doesn't provide an answer for why D3D12 is generally regarded as inferior on NVidia.
Ah, ok, thanks. I thought you had hard evidence or data points that would have indicated brokenness and which I was not aware of. Hard to keep track of the whole internetzes all the time.
 
Yet they still sell cards as fast as they make them. They're so bad that demand exceeded expectations despite retailers jacking up prices. Worst launch ever with higher than expected revenue!
If you're making, say, 30 cards a week and you're selling all of them, you're selling everything as fast as you can make it, but still not making a whole lot of money.

Without a reference point (i.e., actual sales numbers), talk like yours is meaningless. It sounds impressive if you don't look closely, but it might actually not be. That AMD is selling out now means nothing; they're largely selling to their core fans, who have gone two years without a high-end GPU to buy. They need to convince NV people to buy their offerings to gain any meaningful sales traction in real, actual numbers.

Have they done so yet? No, because they're supply limited, so they can't.
 
NVidia had a lot of trouble with async on AotS, and AotS wasn't even using much of this feature..
AotS was one of AMD's poster children for multi-engine dispatch. How much more of it do you expect to show up in games? Don't forget, even within GCN, behaviour can be quite different, so a lot of tuning is required either by the developer or the drivers, which will probably limit its full exploitation by default.

FWIW, multi-engine dispatch is still disabled by default for Nvidia cards in AotS:E's ini-file.
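For readers who haven't followed what "multi-engine dispatch" actually means: roughly, it's submitting independent queues of GPU work (graphics, compute, copy in D3D12) that the hardware may, but is not required to, overlap. The sketch below is not D3D12 or AotS code, just a minimal CUDA-streams analogue of the same idea; the kernel name and sizes are made up for illustration.

```
// Minimal sketch: two independent streams ("queues") of work that the GPU
// scheduler is allowed, but not guaranteed, to run concurrently.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void busy(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = data[i];
        for (int k = 0; k < 256; ++k) v = v * 1.0001f + 0.0001f;  // dummy ALU work
        data[i] = v;
    }
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t s0, s1;          // two independent submission queues
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // No ordering dependency between the two launches, so they may overlap.
    busy<<<(n + 255) / 256, 256, 0, s0>>>(a, n);
    busy<<<(n + 255) / 256, 256, 0, s1>>>(b, n);

    cudaStreamSynchronize(s0);
    cudaStreamSynchronize(s1);
    printf("both streams done\n");

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```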

Perhaps that's why they are trying to push for people to use their abstraction layer on Vulkan as well (and since this is NVidia, we all know what they mean by "abstraction layer").
I don't, could you explain?
 
When the general advice with NVidia is to run games in D3D12 instead of D3D11, we'll know it's not broken. That the current advice is so starkly the opposite indicates a problem.

Same as when the general advice is to buy Vega 64, instead of 1080. Or at the very least, "choose either, they're about the same".

One could argue that D3D12 on NVidia is relatively undesirable, because D3D11 is very good. Though that doesn't provide an answer for why D3D12 is generally regarded as inferior on NVidia.

So when something at NV does not perform perfectly out of the box, it is broken.
When something does not work out of the box at AMD, we are waiting for AMD to unlock the hidden potential?
 
Packed math we've seen ample examples from devs on console.

Packed math being used in consoles tells us jack about how it is going to be used on PC. This is a repackaged argument from the past, when NVidia was supposedly screwed because AMD had a monopoly in consoles, so all cross-platform games would run better on AMD hardware. We all know how that turned out: quite the opposite, with NVidia gaining market share and expanding its performance lead, particularly in perf/watt, while AMD all but lost its share in discrete laptop GPUs. It's useful to remember that although the PS4 Pro has packed math, the Xbox One X doesn't, and the latter is the one closer to DirectX, the API mostly used in PC games. So that AMD advantage might be quite limited to specific cases where OpenGL or Vulkan is used.
 
So when something at NV does not perform perfectly out of the box, it is broken.
When something does not work out of the box at AMD, we are waiting for AMD to unlock the hidden potential?
Yes. Nvidia never promised improvement regarding DX12 performance for Maxwell/Pascal, but AMD stated that Primitive Shaders are currently disabled in the driver and HBCC is off by default. While it doesn't make sense to expect any change in the DX12 performance of Maxwell/Pascal (~3.5 years since the release of Maxwell is a lot of time, but nothing changed), it makes sense to expect some improvements in the geometry processing and memory management of Vega, which was released 3 weeks ago.
Without a reference point (i.e., actual sales numbers), talk like yours is meaningless.
25k in 2.5 weeks.
http://www.tweaktown.com/news/58901/amd-ships-over-25-000-radeon-rx-vega-graphics-cards/index.html
 
So when something at NV does not perform perfectly out of the box, it is broken.
When something does not work out of the box at AMD, we are waiting for AMD to unlock the hidden potential?
NVidia has been failing at D3D12 since Maxwell launched as far as I can tell. Years of fail.
 
So that AMD advantage might be quite limited to specific cases where OpenGL or Vulkan is used.
It is a common misconception that Sony consoles use OpenGL or Vulkan API. Sony consoles have always used their own low level APIs.

PC DirectX (DX11 and DX12) has had standard fp16 support (the min16float type) since the Windows 8 launch. It is also supported on Windows 7 now. I don't see any problems in DirectX regarding fp16 support. PSSL is very similar to HLSL. Porting PS4 shaders to DirectX is trivial.
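To make the "packed math" part concrete for anyone unfamiliar with it: the snippet below is a minimal sketch in CUDA rather than HLSL/min16float (the kernel name and values are made up), but it shows the core idea that two fp16 values share a single 32-bit register and one packed instruction operates on both lanes at once.

```
// Illustrative only; compile with -arch=sm_53 or newer for fp16 arithmetic.
#include <cuda_fp16.h>
#include <cuda_runtime.h>
#include <cstdio>

__global__ void packed_demo(float* check, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        __half2 v = __float2half2_rn(1.5f);           // 1.5 packed into both fp16 lanes
        __half2 s = __float2half2_rn(2.0f);           // 2.0 packed into both fp16 lanes
        __half2 r = __hmul2(v, s);                    // one packed op = two fp16 multiplies
        check[i] = __low2float(r) + __high2float(r);  // 3.0 + 3.0 = 6.0
    }
}

int main() {
    const int n = 256;
    float* check;
    cudaMallocManaged(&check, n * sizeof(float));

    packed_demo<<<1, n>>>(check, n);
    cudaDeviceSynchronize();

    printf("check[0] = %f (expected 6.0)\n", check[0]);
    cudaFree(check);
    return 0;
}
```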
 
30% that is multiplicative with other boosts is bordering on a generational performance increase.
That 30% is only for the checkerboarding effect, not the whole frame. We've discussed this before.
https://forum.beyond3d.com/posts/1995906/
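To put purely illustrative numbers on that (not measurements): if the checkerboard-related work is, say, 20% of the frame time and only that part gets 30% faster, the whole-frame speedup is 1 / ((1 − 0.2) + 0.2 / 1.3) ≈ 1.05, i.e. roughly 5%, not 30%.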
Yet they still sell cards as fast as they make them.
Yeah, when the cards are mere thousands worldwide. Fury X had the same crazy "demand exceeded supply" claim, and guess how that turned out.
http://wccftech.com/amd-hbm-ramp-expected-shortages-fury-fury/
FPS figures that may change daily don't make much sense.
Nor do assertions of ridiculous gains out of thin air.
Hard to tell their thinking, but it probably has to do with performance in a synthetic benchmark being somewhat easy to nail down. None of those pesky resource-management issues, variable object counts, complex shapes, etc. messing things up.
So your argument basically boils down to lots of ifs, maybes, perhapses, guesses and postulations, yet you talk about it like it's the second coming.

NVidia's D3D12 still seems to be broken. How long has that been now?
It runs most DX12 games in a solid way; most games that struggle are AMD Gaming Evolved titles or have a broken path on both vendors. Besides, you don't see people here claiming NV will gain 30% more performance once they optimize their DX12 drivers. And if they did, they wouldn't be left unchecked to claim whatever baseless point they conjure.

Though that doesn't provide an answer for why D3D12 is generally regarded as inferior on NVidia.
D3D12 is generally unstable and undesirable right now; many games work worse in DX12 than DX11: Battlefield 1, Deus Ex, Quantum Break, and Warhammer. Comparing the previous generation (Maxwell vs Fiji), NV cards run many DX12 games faster than AMD: Forza 6, Forza Horizon 3, Halo Wars 2, Rise of the Tomb Raider, Warhammer, Gears of War: Ultimate Edition, and Gears of War 4. But AMD runs others faster: Sniper Elite 4, Hitman, Ashes of the Singularity, The Division, Deus Ex, and Battlefield 1. We have an entire thread dedicated to tracking DX12 games, full of links to all sorts of tests; you are welcome to check it out.

To sum it up: the situation in D3D12 is a mess, though it's a tie at worst. It gets even messier if you involve Pascal vs Vega, as some games that were slower become faster, or vice versa. The jury is still out on this one.
 
It is a common misconception that Sony consoles use OpenGL or Vulkan API. Sony consoles have always used their own low level APIs.

PC DirectX (DX11 and DX12) has had standard fp16 support (the min16float type) since the Windows 8 launch. It is also supported on Windows 7 now. I don't see any problems in DirectX regarding fp16 support. PSSL is very similar to HLSL. Porting PS4 shaders to DirectX is trivial.

Yes, my statement was not about technical capability for FP16 per se. It was about packed math in consoles, which is only available on the PS4 Pro. The AMD GPU in the Xbox One X does not support it, so it's not as if this Vega advantage translates directly to PC in the same way. Unless packed math is handled entirely by the driver, requiring no input from the developer beyond declaring the value as FP16, that is.
 
RPM being available only on AMD's top-of-the-line cards, with the market-penetration implications that carries, would not help rapid adoption of this feature, would it? They removed FP16 support from their most recent drivers for all but Vega.
 
About FP16, AnandTech noted it was not available in OpenCL yet, and concerning DirectX: "However based on some other testing, I suspect that native FP16 support may only be enabled/working for compute shaders at this time, and not for pixel shaders. In which case AMD may still have some work to do."

http://www.anandtech.com/show/11717/the-amd-radeon-rx-vega-64-and-56-review/4

So I guess even that is still not fully activated...
They also disabled FP16 support on GCN3 and Polaris...
 
When the general advice with NVidia is to run games in D3D12 instead of D3D11, we'll know it's not broken. That the current advice is so starkly the opposite indicates a problem.
Given the (still growing) installed base of DX11 games out there, having a non-"broken" D3D11 product seems to be preferable over the alternative. ;)

NVidia has been failing at D3D12 since Maxwell launched as far as I can tell. Years of fail.
So that's 3 years. But when was DX12 really available to the public? Sometime in 2015? So that's 2 years of "fail". Meanwhile, there are articles like this: http://www.techradar.com/news/the-forgotten-api-just-what-is-going-on-with-dx12

Now replace "Nvidia" with "AMD" and "D12" with "D11" in your sentence. Few will argue that, since the introduction of Maxwell, AMD hasn't been doing as well as Nvidia in the GPU space. By your DX12 standard, that's failing as well, and with some of their chips, the failure is quite spectacular.

Now which of the two failing cases is worse...
 
Given the (still growing) installed base of DX11 games out there, having a non-"broken" D3D11 product seems to be preferable over the alternative. ;)
I agree, not sure why you would think otherwise. Most games use low-tech graphics.

So that's 3 years. But when was DX12 really available to the public? Sometime in 2015? So that's 2 years of "fail". Meanwhile, there are articles like this: http://www.techradar.com/news/the-forgotten-api-just-what-is-going-on-with-dx12
Not sure why you quoted the article when it has nothing meaningful to contribute on this subject. The subject is "NVidia's long term failure to get D3D12 working well on its D3D12 GPUs".

Now replace "Nvidia" with "AMD" and "D12" with "D11" in your sentence. Few will argue that, since the introduction of Maxwell, AMD hasn't been doing as well as Nvidia in the GPU space. By your DX12 standard, that's failing as well, and with some of their chips, the failure is quite spectacular.

Now which of the two failing cases is worse...
I'm mystified how AMD thinks it's going to survive in gaming graphics over the next 3 years, when it appears to be about 2 years behind now and likely to add another year to its deficit within the next 9 months. I think Vega is the final nail in the coffin. Maybe AMD will surprise me.

But that doesn't prevent me making the observation that all is not rosy in NVidia's drivers with high profile, highly demanding games, some of which have D3D12 API usage.

The "NVidia's drivers are always excellent from day one" meme doesn't belong around here. Yet for some reason it's tolerated.

So when we discuss how long it might take for AMD to sort out its drivers for Vega, we need to remember that NVidia has struggled to sort out its driver for Pascal D3D12. Maxwell D3D12 is such a tedious subject I'm not sure anyone can be bothered with it.

Vega's driver appears to be at a new low for launch quality, and it's just bizarre that something that's been up and running for 9+ months is this bad. It seems AMD has just decided not to bother any more: there has been no deep dive, there's no demonstration code, there's no meaningful engagement with journalists, and reviewers were given the minimum time to test. Apart from anything else, when you give reviewers so little time, it prevents you from answering their questions. How much more disengaged can AMD be?
 