DirectX 12: Its future in the console gaming space (specifically the XB1)

Having finally done some long overdue research into feature levels, it's now pretty clear that that AMD slide is just clever marketing speak. To be clear, Maxwell 2 is already the first GPU to be fully DX12 compliant; that is to say, it supports feature level 12_1 (the highest feature level). There are then some optional features which are not required for a GPU to be considered a "full DX12 implementation". One of those features is resource binding tier 3, which Fiji will support. This makes Fiji the first GPU with a "full DX12 implementation" in AMD's sense, i.e. feature level 12_1 (which Maxwell 2 also has) in addition to resource binding tier 3.

So I believe AMD are merely using clever wording to make Fiji (GCN 1.3) sound a bit more impressive from a feature level point of view than it actually is.

As for GCN 1.1 (Hawaii and probably the XBO) and GCN 1.2 (Tonga), I'm not 100% certain, but it looks like they support feature level 12_0 at best (my uncertainty is whether they are that high or lower), so arguably neither console nor any currently released AMD GPU can be said to fully support DX12 at the feature level. They will of course support the API itself though.
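
To make the feature level question concrete: once the DX12 SDK is public, something like the sketch below should report the highest level a given GPU supports. This is just a minimal sketch against the public D3D12 headers (bare-minimum device creation, error handling omitted), not production code:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

int main()
{
    // DX12 only requires feature level 11_0 to create a device,
    // which is why even GCN 1.0 will run the API itself.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Ask the driver which of these levels it actually supports;
    // it reports the highest one back.
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS caps = {};
    caps.NumFeatureLevels = _countof(levels);
    caps.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &caps, sizeof(caps));

    std::printf("Max feature level: 0x%x\n", caps.MaxSupportedFeatureLevel);
    device->Release();
    return 0;
}
```

Note that a FL12_0 part would report 12_0 here no matter which optional tiers (like resource binding tier 3) it also happens to support.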
 
Yep .. it's XB1 .. as the main man Phil himself said
Yep .. it's XB1 .. as the main man Phil himself said
I agree. Phil Spencer said they were fully aware of DirectX 12 when they built the Xbox One, so it is effectively the first DirectX 12 device:

http://n4g.com/news/1659528/phil-spencer-we-knew-about-dx12-when-we-built-xbox-one

Having finally done some long overdue research into feature levels, it's now pretty clear that that AMD slide is just clever marketing speak. To be clear, Maxwell 2 is already the first GPU to be fully DX12 compliant; that is to say, it supports feature level 12_1 (the highest feature level). There are then some optional features which are not required for a GPU to be considered a "full DX12 implementation". One of those features is resource binding tier 3, which Fiji will support. This makes Fiji the first GPU with a "full DX12 implementation" in AMD's sense, i.e. feature level 12_1 (which Maxwell 2 also has) in addition to resource binding tier 3.

So I believe AMD are merely using clever wording to make Fiji (GCN 1.3) sound a bit more impressive from a feature level point of view than it actually is.

As for GCN 1.1 (Hawaii and probably the XBO) and GCN 1.2 (Tonga), I'm not 100% certain, but it looks like they support feature level 12_0 at best (my uncertainty is whether they are that high or lower), so arguably neither console nor any currently released AMD GPU can be said to fully support DX12 at the feature level. They will of course support the API itself though.
Nice research.. It mostly applies to the PC, because the Xbox One is the first DirectX 12 device on the market, which is even more clear after Mike Ybarra said that only full DX12 GPUs will get all the benefits of DirectX 12, although it will work with any modern GPU on the market.

Someone asked him on Twitter: "To get the full support of DX12, will users need to get a new graphics card?"

In order to get the "full benefits of DX12," Ybarra replied, "the answer is yes."

http://www.pcworld.com/article/2873545/dont-panic-directx12-wont-require-a-new-graphics-card-after-all.html
 
the Xbox One is the first DirectX 12 device on the market, which is even more clear after Mike Ybarra said that only full DX12 GPUs will get all the benefits of DirectX 12, although it will work with any modern GPU on the market.

Someone asked him on Twitter: "To get the full support of DX12, will users need to get a new graphics card?"

In order to get the "full benefits of DX12," Ybarra replied, "the answer is yes."

http://www.pcworld.com/article/2873545/dont-panic-directx12-wont-require-a-new-graphics-card-after-all.html

I'm afraid I don't follow the logic of that. All Phil says is "Some DX12 features are already in XBOX One but full DX12 coming."

In order to fully support the DX12 API, your GPU must be compliant with feature level 11_0. GCN 1.0 is already feature level 11_1 (and significantly pre-dates the XBO itself), so there is no doubt that the XBO will fully support the API itself.

With regards to "some DX12 features are already in XBOX One", that to me could mean the console supports some of the mandatory hardware features that form part of either FL12_0 or FL12_1, but that the "full DX12" API itself is not yet present on the console. So his statement could merely mean "yep, we support some of the new hardware features that DX12 brings (and some are even exposed now through our existing API), but the full DX12 API (with all its new threading and efficiency improvements) is yet to come to the XBOX One". I don't think he's necessarily saying that the full FL12_1 feature set currently exists in the XBO and is going to be exposed by DX12.

EDIT: just to add to that, it's already known that the XBO supports at least one of the DX12 FL12_0 features (and in fact goes beyond it), which is resource binding Tier 3. All GCN GPUs support that feature, so there are clearly grounds already for Phil to state "some DX12 features are already in XBOX One".

To be clear, no AMD GPU currently sports the full 12_1 feature set, and that includes Tonga. Why would Tonga, a significantly newer version of GCN than the one in the XBO, not include those features while the older XBO APU does? It doesn't seem to make much sense to me.

In order for the XBO to be a "full DX12 implementation", i.e. to support the highest DX12 feature level 12_1, it would need to support Conservative Rasterization (Tier 1) and Rasterizer Ordered Views. Is there any evidence at all that it does support those features?
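
On the PC side at least, both of those requirements will be directly queryable once DX12 ships. A minimal sketch, assuming a `device` created as in the earlier snippet (struct and field names are from the public D3D12 headers):

```cpp
// The optional/tiered features all live in one caps struct.
D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

// FL12_1 requires conservative rasterization tier 1 or higher AND ROVs,
// so a GPU has to report both of these to claim the top feature level.
std::printf("Conservative rasterization tier: %d\n",
            (int)opts.ConservativeRasterizationTier); // 0 = not supported
std::printf("ROVs supported: %s\n", opts.ROVsSupported ? "yes" : "no");
```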
 
Unfortunately no, we are still waiting to confirm that. I've been trying to figure out this piece to no avail. I've been looking so long for this info that as soon as I read DX12, feature level and SoC, my mind only sees Xbox lol. Bad pattern bias, like seeing the face on Mars.

If we don't find out soon, I am going to walk around Build conference looking for the Xbox team until I get an answer lol.

IIRC there was supposed to be some podcast that interviewed the Xbox team on how DX12 would affect the XBO. I suppose that never happened. Or they are still under the NDA which, IIRC, they tweeted they needed to sign prior to the interview happening. I just don't get it.
 
This spreadsheet is from GAF. I'm not sure if it's accurate, but wow, amazing if true.

http://m.neogaf.com/showpost.php?p=156266935

He/they was/were specific about hardware versions, so I'm a bit confused, because if this is true then the first GPU with a full DX12 implementation would be, well, the Radeon HD 7790.
 
I thought it was already confirmed that GCN 1.0-1.2 don't support ROVs or Conservative Rasterization. That would seem to be supported by AMD's Fiji slide, which states it's the first GPU with a full DX12 implementation + Tier 3 resource binding.

Since all GCN GPUs feature Tier 3 resource binding, that must mean that Fiji is the first AMD GPU to support the rest of the feature level 12_1 set. Which doesn't tally with that table.
 
That's pretty much what I felt as well. My check was that not all Sea Islands or Southern Islands parts support Tier 2 Tiled Resources; from my understanding only the 7790 does in that family, followed afterwards by the Radeon R7 260X (Bonaire, GCN 1.1), R9 290X (Hawaii, GCN 1.1) and R9 285 (Tonga, GCN 1.2).

So, they can't all be the same feature set.
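
For what it's worth, the tiled resources tier sits in the same caps struct as the other optional features, so those family differences should be directly visible per GPU once DX12 is out; a minimal sketch, again assuming a `device` from the first snippet:

```cpp
// Tiled resources support is reported as a tier (0 = not supported).
D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
std::printf("Tiled resources tier: %d\n", (int)opts.TiledResourcesTier);
```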
 
I thought it was already confirmed that GCN 1.0-1.2 don't support ROVs or Conservative Rasterization. That would seem to be supported by AMD's Fiji slide, which states it's the first GPU with a full DX12 implementation + Tier 3 resource binding.

Since all GCN GPUs feature Tier 3 resource binding, that must mean that Fiji is the first AMD GPU to support the rest of the feature level 12_1 set. Which doesn't tally with that table.

Even the first GCN uses manual interpolation as opposed to fixed function interpolation in other architectures, so conservative rasterization shouldn't be a problem with new drivers.

As for ROVs, it's the same as Intel Pixelsync, and GCN already supports that in OpenGL.
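
For reference, the OpenGL extension being referred to here is GL_INTEL_fragment_shader_ordering. A minimal sketch of how an app could check a driver for it, assuming a GL 3+ context is current and the GL 3.0 entry points are loaded (e.g. via GLEW):

```cpp
#include <GL/glew.h> // assumption: GLEW (or any loader) provides glGetStringi
#include <cstring>

// Returns true if the driver exposes Intel's fragment shader ordering
// extension, the GL analogue of the Pixelsync/ROV feature discussed above.
// A fragment shader could then call beginFragmentShaderOrderingINTEL()
// before its ordered UAV access.
bool HasFragmentShaderOrdering()
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = reinterpret_cast<const char*>(
            glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
        if (ext && std::strcmp(ext, "GL_INTEL_fragment_shader_ordering") == 0)
            return true;
    }
    return false;
}
```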
 
Even the first GCN uses manual interpolation as opposed to fixed function interpolation in other architectures, so conservative rasterization shouldn't be a problem with new drivers.

As for ROVs, it's the same as Intel Pixelsync, and GCN already supports that in OpenGL.

Are ROVs slow on GCN?
 
Even the first GCN uses manual interpolation as opposed to fixed function interpolation in other architectures, so conservative rasterization shouldn't be a problem with new drivers.

As for ROVs, it's the same as Intel Pixelsync, and GCN already supports that in OpenGL.
Ah welcome to B3D.

S.I, C.I, and V.I in your spreadsheet, are you sure about going as far back as Sea/Southern Islands?
 
As for ROVs, it's the same as Intel Pixelsync, and GCN already supports that in OpenGL.

That support is not the same as Intel's Pixelsync. I'm not sure why people keep saying this is just a software feature, but it's not clear ROVs can be implemented practically without some hardware assistance. From what I understand, ROV on GCN is not fast (at least not "fast" like it is for Intel).
 
Resource Binding Tiers

Maximum Values             Tier 1   Tier 2      Tier 3
CBV/SRV/UAV heap size      2^16     2^20        2^20+
CBVs per stage             14       14          full heap
SRVs per stage             128      full heap   full heap
UAVs across all stages     8-64     64          full heap
Samplers per stage         16       full heap   full heap
SRV descriptor tables      5        5           no limit

https://intel.lanyonevents.com/sf14/connect/fileDownload/session/1B0D11C9796E33739827241E4DD51A76/SF14_GVCS005_101f.pdf
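
For completeness, the binding tier in that table is queryable through the same caps struct as the other optional features; a minimal sketch, assuming the `device` from the earlier snippets:

```cpp
// The resource binding tier is reported alongside the other options.
D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
switch (opts.ResourceBindingTier) {
    case D3D12_RESOURCE_BINDING_TIER_1: std::printf("Tier 1\n"); break;
    case D3D12_RESOURCE_BINDING_TIER_2: std::printf("Tier 2\n"); break;
    case D3D12_RESOURCE_BINDING_TIER_3: std::printf("Tier 3 (full heap)\n"); break;
}
```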
 
That support is not the same as Intel's Pixelsync. I'm not sure why people keep saying this is just a software feature, but it's not clear ROVs can be implemented practically without some hardware assistance. From what I understand, ROV on GCN is not fast (at least not "fast" like it is for Intel).

In its current specification in DX12 it is exactly the same as Pixelsync; that's all we know right now, plus the fact that GCN supports that extension in OpenGL. I'm not suggesting anything else :)
 
Even the first GCN uses manual interpolation as opposed to fixed function interpolation in other architectures, so conservative rasterization shouldn't be a problem with new drivers.

As for ROVs, it's the same as Intel Pixelsync, and GCN already supports that in OpenGL.

But then why would AMD themselves claim in the leaked slide (assuming it's real) that Fiji is their first full DX12 implementation?
 
Ah welcome to B3D.

S.I, C.I, and V.I in your spreadsheet, are you sure about going as far back as Sea/Southern Islands?

Hey, thanks!

Well, nothing is sure at the moment. The contents of that table are based on some insider information, the current (unfinished) DX12 specs, and our knowledge of the GCN hardware. It should be correct in the end, but I guess we will only know for sure when DX12 is released.
 
Resource Binding Tiers

Maximum Values             Tier 1   Tier 2      Tier 3
CBV/SRV/UAV heap size      2^16     2^20        2^20+
CBVs per stage             14       14          full heap
SRVs per stage             128      full heap   full heap
UAVs across all stages     8-64     64          full heap
Samplers per stage         16       full heap   full heap
SRV descriptor tables      5        5           no limit

https://intel.lanyonevents.com/sf14/connect/fileDownload/session/1B0D11C9796E33739827241E4DD51A76/SF14_GVCS005_101f.pdf

Yeah, this part is final in the DX12 spec, so it's for sure that even the first GCN is Tier 3, and Maxwell 2 is Tier 2.
 
Hey, thanks!

Well, nothing is sure at the moment. The contents of that table are based on some insider information, the current (unfinished) DX12 specs, and our knowledge of the GCN hardware. It should be correct in the end, but I guess we will only know for sure when DX12 is released.

Interesting!

Earlier we had a similar discussion; @Andrew Lauritzen weighed in with his thoughts on GCN and the DX12 feature set here:
https://forum.beyond3d.com/posts/1823113/

Sebbbi follows up shortly afterwards.

I'm not sure if things have changed since GDC; maybe he knows more now. He can likely speak to the PixelSync question.
 
But then why would AMD themselves claim in the leaked slide (assuming it's real) that Fiji is their first full DX12 implementation?

It could be just for marketing purposes (they need to sell the new cards), or I guess it could be because of the optional features: AMD can say that Fiji supports absolutely everything that DX12 has to offer.
 