NVIDIA Maxwell Speculation Thread

VLIW is still common since it's in APUs. Hell, a dual-core Kaveri is still missing. It makes more sense to buy a 50-euro APU than a 150-euro one; the A10-6790K is another option.

So VLIW is still year-2014 hardware, even if it may be antiquated. The Radeon 6000 series looks more like the Radeon 2900 XT than like GCN, whereas Fermi looks more like Kepler than like G80.

Fermi was when NVIDIA got serious about Tesla and compute. GCN is AMD's similarly modern architecture.
 
No it doesn't.
The DX12 API will work on all Fermi/Kepler/Maxwell cards, just like DX11.2 works on all DX9+ hardware whose drivers support the API.
Fermi/Kepler/Maxwell will still be limited to D3D Feature Level 11_0 even under DX12, which may or may not bring a Feature Level 12_0.
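
To put the API-version vs. feature-level distinction in concrete terms, here's a minimal C++ sketch (assuming the Windows 8.x SDK and the documented D3D11CreateDevice behaviour): whichever D3D11.x runtime you run on, the device comes back at the feature level the GPU/driver actually supports, and that cap is what the Fermi/Kepler/Maxwell discussion is about.

    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main()
    {
        // Ask for the highest feature level first; the runtime grants whatever
        // the hardware/driver supports (11_0 in the Fermi/Kepler case above).
        const D3D_FEATURE_LEVEL wanted[] = {
            D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
            D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0,
        };
        ID3D11Device* device = nullptr;
        D3D_FEATURE_LEVEL granted = D3D_FEATURE_LEVEL_10_0;
        HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                       wanted, 4, D3D11_SDK_VERSION,
                                       &device, &granted, nullptr);
        if (hr == E_INVALIDARG) // a pre-11.1 runtime rejects the 11_1 entry; retry without it
            hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   wanted + 1, 3, D3D11_SDK_VERSION,
                                   &device, &granted, nullptr);
        if (SUCCEEDED(hr)) {
            printf("Granted feature level: 0x%04x\n", granted); // 0xb000 == 11_0, 0xb100 == 11_1
            device->Release();
        }
        return 0;
    }

Which is why "runs on the DX11.2 (or DX12) runtime" and "is Feature Level 11_1 hardware" are two different statements.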

AMD hasn't said anything about which of their cards will support DX12, but most likely it'll be all of their DX11 hardware, just like NVIDIA's, and they'll most likely still be limited to Feature Levels 11_0 (VLIW) and 11_1 (GCN) even if DX12 brings something new (though there's also a remote chance of GCN qualifying for Feature Level 12_0, if such a Feature Level materializes, due to the XB1 using GCN hardware).

That is not full hardware support, then.
If so, what hardware will be fully compliant with DX12, rather than just DX11 hardware running on the DX12 runtime in Windows?
 
That is not full hardware support, then.
If so, what hardware will be fully compliant with DX12, rather than just DX11 hardware running on the DX12 runtime in Windows?

Either GCN, GCN 1.1, or something that isn't out yet.
We don't know whether there's going to be a Feature Level 12_0; if not, GCN 1.1 at least, and possibly all GCN, would support everything the API offers.
If there is, there's still a remote chance of GCN and/or GCN 1.1 supporting it, because the XB1 uses GCN 1.1.
 
Various sites have relayed that Raja Koduri announced at the Microsoft event that DX12 would be supported by all GCN hardware.
Given the deep parallels between the two APIs, it seems consistent.
 
Nvidia hardware supports everything in DX11.1 and DX11.2, including Tiled Resources, except for the Direct2D non-gaming features.

You're getting full DX12 support; it's hilarious that AMD can't even say at GDC whether HD 5000/HD 6000 will get DX12 support or not.
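
For reference, on the 11.2 runtime Tiled Resources support is reported as an optional cap queried per device, not something implied by the feature level. A minimal sketch (assuming the Windows 8.1 SDK headers and an already created ID3D11Device):

    #include <d3d11_2.h>
    #include <cstdio>

    // Prints the Tiled Resources tier (0 = not supported, 1/2 = tier 1/2).
    void PrintTiledResourcesTier(ID3D11Device* device)
    {
        D3D11_FEATURE_DATA_D3D11_OPTIONS1 opts = {};
        if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS1,
                                                  &opts, sizeof(opts))))
            printf("Tiled Resources tier: %d\n", opts.TiledResourcesTier);
    }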
 
Nvidia hardware supports everything in DX11.1 and DX11.2, including Tiled Resources, except for the Direct2D non-gaming features.
UAVs in all shader stages, to my understanding, isn't a Direct2D feature; it's tied to D3D Feature Level 11_1. Even though NV hardware is capable of it, they can't use it in DirectX because they're limited to Feature Level 11_0.
 
UAVs in all shader stages, to my understanding, isn't a Direct2D feature; it's tied to D3D Feature Level 11_1. Even though NV hardware is capable of it, they can't use it in DirectX because they're limited to Feature Level 11_0.
This feature can be exposed through NVAPI.

The bigger problem is the number of UAVs in the pipeline. NV only supports 8, while AMD GCN and Intel Haswell support 64. Technically, AMD GCN can support an unlimited number of UAVs.
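
Those two limits are literally constants in the D3D11 headers, tied to the feature level rather than to the vendor. A small sketch (assuming the Windows 8 SDK) of reporting which limit applies to a given device:

    #include <d3d11_1.h>

    // 8 UAV slots at Feature Level 11_0 (D3D11_PS_CS_UAV_REGISTER_COUNT),
    // 64 at Feature Level 11_1 (D3D11_1_UAV_SLOT_COUNT).
    unsigned UavSlotLimit(ID3D11Device* device)
    {
        return device->GetFeatureLevel() >= D3D_FEATURE_LEVEL_11_1
                   ? D3D11_1_UAV_SLOT_COUNT
                   : D3D11_PS_CS_UAV_REGISTER_COUNT;
    }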
 
Nvidia hardware supports everything in DX11.1 and DX11.2, including Tiled Resources, except for the Direct2D non-gaming features.

You're getting full DX12 support; it's hilarious that AMD can't even say at GDC whether HD 5000/HD 6000 will get DX12 support or not.

According to The Tech Report, AMD GPUs prior to the HD 7xxx series (i.e. non-GCN GPUs) will NOT support DX12.
 
According to The Tech Report, AMD GPUs prior to the HD 7xxx series (i.e. non-GCN GPUs) will NOT support DX12.

There has been some speculation that they meant GCN parts support DirectX 12 "as a whole", which doesn't necessarily exclude the VLIW DX11 cards from supporting the API. The bigger question is where you draw the line today between what "supports the API" and what doesn't: NVIDIA claims DX11.2 (and DX12) support for 11_0 hardware, so don't the VLIW DX11 cards support DX11.2 at 11_0, too?
 
From my reading at TechReport, isn't DX12 simply a sideways move that only tackles performance/driver overhead, and nothing else? (Not saying that's a bad thing.)

If so, it's just DX11 with a lighter coat, and it should work on all GPUs that have previously supported DX11, including pre-GCN. IMO AMD simply chose not to support pre-GCN because of resource constraints, not for technical reasons. Which is a fair decision: nobody ever promised that 3-year-old GPUs would be upgradable to the next DX level.
 
There has been some speculation that they meant GCN parts support DirectX 12 "as a whole", which doesn't necessarily exclude the VLIW DX11 cards from supporting the API. The bigger question is where you draw the line today between what "supports the API" and what doesn't: NVIDIA claims DX11.2 (and DX12) support for 11_0 hardware, so don't the VLIW DX11 cards support DX11.2 at 11_0, too?

It looks like you are trying to (intentionally?) obfuscate the issue here by talking about DX11 feature level differences, which are largely irrelevant to a discussion of DX12 support. DX12 is obviously a more forward-looking API than DX11, even if much existing hardware can take advantage of it. Pre-GCN DX11 GPUs from AMD do NOT support Mantle, so it is not too surprising that pre-GCN DX11 GPUs from AMD will not support DX12 (and if that were not the case, AMD would surely have clarified by now). Like silent_guy said, this may be due to resource constraints rather than technical constraints on AMD's part.
 
DX12 is obviously a more forward-looking API than DX11, even if much existing hardware can take advantage of it.
How did you come to that conclusion? I haven't seen any pointers to new HW functionality. (All based on just one slide, unfortunately.) More CPU concurrency, a lightweight driver layer, etc. It all points to pure software to me.
 
How did you come to that conclusion? I haven't seen any pointers to new HW functionality. (All based on just one slide, unfortunately.) More CPU concurrency, a lightweight driver layer, etc. It all points to pure software to me.

More forward-looking in the sense that the lower overhead will make the API more suitable not just for desktop graphics but also for notebook, tablet, and smartphone graphics. But you are right: in technical terms, the hardware requirements may not be any stricter than DX11's.

On a side note, even though NVIDIA's 8xxM mobile GPUs contain a mix of Fermi (820M), Kepler, and Maxwell GPUs, they will all support DX12!
 
From my reading at TechReport, isn't DX12 simply a sideways move that only tackles performance/driver overhead, and nothing else? (Not saying that's a bad thing.)

If so, it's just DX11 with a lighter coat, and it should work on all GPUs that have previously supported DX11, including pre-GCN. IMO AMD simply chose not to support pre-GCN because of resource constraints, not for technical reasons. Which is a fair decision: nobody ever promised that 3-year-old GPUs would be upgradable to the next DX level.

It's not clear how the VLIW GPUs would be able to practically handle the sort of resource binding and indirection that Mantle or DX12 seem to require but DX11 does not.
The lack of a flexible memory architecture pre-GCN may also preclude internal emulation of some things, like the promised programmable blending functionality.
While potentially not ideal, a compute shader and the common read/write path can run things through even if the ROPs can't hack it.
 
It's not clear how the VLIW GPUs would be able to practically handle the sort of resource binding and indirection that Mantle or DX12 seem to require but DX11 does not.
The lack of a flexible memory architecture pre-GCN may also preclude internal emulation of some things, like the promised programmable blending functionality.
While potentially not ideal, a compute shader and the common read/write path can run things through even if the ROPs can't hack it.
But was Fermi able to do all that kind of stuff? I believe bindless textures etc. were first introduced only with Kepler?
 
I don't think Fermi can directly handle bindless extensions. Its support for more general memory addressing and its memory subsystem might allow some hackish way of following pointers to similarly laid out regions of memory within a compute kernel's execution.

I suspect that having memory traffic, then a little math, then more memory traffic on a VLIW GPU is going to hit clause-switch penalties, since the operations are segregated. The general read-only memory pipeline would reduce the chances of emulating things with general compute code.
AMD might also have given up trying to refactor the VLIW compiler and toolchain, finding the architecture too inflexible to provide support with enough performance to make it worthwhile.
 
The 5000 series is also past the usual age at which ATI switches to legacy drivers. Though really, we have no way to know how much development still goes into the 5000/6000 series; they don't say much in their release notes. NV still talks about performance boosts for the 400/500 series.

I have noticed a few recent bug fixes in games for my 6950, though. Earlier this year they fixed a repeatable black-screen crash in Metro: Last Light, for example. However, considering the game is not exactly newly released, the lag time on that fix is not a good sign.

There are still a number of rendering bugs with Crysis 3 on the 69xx; I finally beat that in January. Black squares with the fog shadow effect (r_fogshadows must be manually disabled), and extreme model corruption when tessellation is set to Very High. I don't know if Crytek is at fault, but these things work on other cards.
 
Damn, I find this a bit unnerving. Do the games work without glitches if you simply set medium or low detail? I made a friend get an AMD A6, and I seriously wonder if an Athlon 370K + GeForce GT 610 would have been safer (yes, that CPU is not exactly great, but it works, and that's already something; ditto the GPU). Sometimes you just want stuff to run, with no bugs, no swapping, and at least 20 fps rather than 2 fps.

AMD still has to support these things; at worst, some lag on game releases isn't terrible, since the games are full price and at version 1.0.0 anyway. They do sell them, after all.
 
Damn, I find this a bit unnerving. Do the games work without glitches if you simply set medium or low detail?
Crysis 3 is the only game I can think of that has always had bugs on my 6950. The tessellation issue isn't very important because Very High object detail is too slow on anything less than a 6950. The black squares thing is frequent but not continuous and it can be manually fixed with r_fogshadows=0. I'm not sure if one of the detail presets disables that.

My 6950 runs Thief quite well, so there's that. AMD still seems to be on top of things with them, for the most part.
 