DirectX 12: The future of it within the console gaming space (specifically the XB1)

It could be just for marketing purposes (they need to sell the new cards), or I guess it could be because of the optional features -- AMD can say that Fiji supports absolutely everything that DX12 has to offer.

Or maybe Fiji has a fast ROV implementation... No idea about the speed of conservative rasterization on pre-Fiji GCN...
 
Interesting!

Earlier we had a similar discussion; @Andrew Lauritzen weighed in with his thoughts on the GCN and DX12 feature set here:
https://forum.beyond3d.com/posts/1823113/

Sebbbi follows up shortly afterwards.

I'm not sure if things have changed since GDC; maybe he knows more now. He can likely speak to PixelSync.

Or maybe Fiji has a fast ROV implementation... No idea about the speed of conservative rasterization on pre-Fiji GCN...

Yeah well... I guess speed isn't the same as supporting these features. To the best of my knowledge, the hardware is there, but maybe AMD won't even bother with it and will just sell Fiji as a proper DX12 card.
 
I guess this comes down to whether a particular feature is fast enough to be practically usable. If it's possible through hardware but in such a convoluted way that it's too slow to be of any practical use, then it could be argued that it's not really supported at all. Perhaps Fiji simply implements these features in a more practical/usable way, which is why AMD is drawing the distinction between it and older GPUs.
 
Yeah well... I guess speed isn't the same as supporting these features. To the best of my knowledge, the hardware is there, but maybe AMD won't even bother with it and will just sell Fiji as a proper DX12 card.
No worries, stick around. It's a good spreadsheet; we don't have enough people who invest effort into doing these things. Arwin's got a website for games, but yeah, things like this are helpful for people who are looking to buy video cards -- I can only imagine the confusion going forward as to which features are and aren't properly supported.
 
No worries, stick around. It's a good spreadsheet; we don't have enough people who invest effort into doing these things. Arwin's got a website for games, but yeah, things like this are helpful for people who are looking to buy video cards -- I can only imagine the confusion going forward as to which features are and aren't properly supported.

Thanks. I should've made it clear that it's not official or anything; it's just the current best "guess" -- backed up by some industry insider sources.

What we know for sure is that currently the max DX12 feature level is 12_1, which requires at minimum tier 2 resource binding and typed UAV loads. ROVs and conservative rasterization are both optional, so both Maxwell 2 and GCN are currently DX12 feature level 12_1 capable. Microsoft made tier 3 resource binding basically for the Xbox One, but because GCN is also available in PCs, they just brought it over.
 
Thanks. I should've made it clear that it's not official or anything; it's just the current best "guess" -- backed up by some industry insider sources.

What we know for sure is that currently the max DX12 feature level is 12_1, which requires at minimum tier 2 resource binding and typed UAV loads. ROVs and conservative rasterization are both optional, so both Maxwell 2 and GCN are currently DX12 feature level 12_1 capable. Microsoft made tier 3 resource binding basically for the Xbox One, but because GCN is also available in PCs, they just brought it over.

What I mean by this is that it doesn't make any sense for AMD to promote Fiji as the first DX12 GPU on that slide, because clearly even the first GCN is fully DX12 12_1 capable.
 
ROVs and conservative rasterization are both optional

Do you have a source for that? I thought ROV and CR were both mandatory requirements for FL 12_1. If these features aren't mandatory, then what features make up the mandatory feature set that is 12_1?
 
Technically AMD is not wrong, as Fiji is the first GPU with the full DX12 spec. It's the first card that supports everything DX12 has to offer!
Maxwell 2 is feature level 12.1, but it is not tier 3.
The first Feature level 12.2 and tier 3 GPU will be Fiji.
Current GCN cards are not feature level 12.1, just feature level 12.0, although they are tier 3.
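The distinction this post draws between feature levels and resource binding tiers can be sketched as a toy capability check. This is not the real D3D12 API (actual caps are queried via ID3D12Device::CheckFeatureSupport); the dictionary keys, the collapsed "11_x" baseline, and the exact requirement set below are simplified assumptions for illustration only:

```python
# Toy model of deriving a DX12 feature level from a few caps.
# Assumption: FL 12_1 adds conservative rasterization and ROVs on top of
# 12_0, while the resource binding tier is reported separately and does
# NOT raise the feature level on its own.

def max_feature_level(caps):
    """Return the highest feature level this simplified model grants."""
    if caps["resource_binding_tier"] < 2 or not caps["typed_uav_loads"]:
        return "11_x"  # everything below 12_0, collapsed for this sketch
    if caps["rasterizer_ordered_views"] and caps["conservative_raster_tier"] >= 1:
        return "12_1"
    return "12_0"

# Illustrative cap sets matching the claims in this post (not vendor data):
maxwell2 = {"resource_binding_tier": 2, "typed_uav_loads": True,
            "rasterizer_ordered_views": True, "conservative_raster_tier": 1}
gcn = {"resource_binding_tier": 3, "typed_uav_loads": True,
       "rasterizer_ordered_views": False, "conservative_raster_tier": 0}

print(max_feature_level(maxwell2))  # 12_1, despite only binding tier 2
print(max_feature_level(gcn))       # 12_0, despite binding tier 3
```

The point of the sketch is that feature level and binding tier are orthogonal axes: a card can sit higher on one and lower on the other, which is exactly the Maxwell 2 vs GCN situation described above.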
 
Do you have a source for that? I thought ROV and CR were both mandatory requirements for FL 12_1. If these features aren't mandatory, then what features make up the mandatory feature set that is 12_1?

Yeah, seems like you are right. Sorry, my source was outdated on that; I thought those were still optional.
 
Technically AMD is not wrong, as Fiji is the first GPU with the full DX12 spec. It's the first card that supports everything DX12 has to offer!
Maxwell 2 is feature level 12.1, but it is not tier 3.
The first Feature level 12.2 and tier 3 GPU will be Fiji.
Current GCN cards are not feature level 12.1, just feature level 12.0, although they are tier 3.

Well, it still doesn't make sense that they call the 290X a DX 11.2 GPU, because it's at least 12.0.
 
Even the first GCN uses manual interpolation as opposed to the fixed-function interpolation in other architectures,
By "manual interpolation", you mean interpolation using shader instructions? NVidia got there years earlier.

so conservative rasterization shouldn't be a problem with new drivers.

I don't see how interpolation is relevant to rasterisation. How does shader based interpolation make conservative rasterisation work at full rasterisation rate? By the time you've launched a fragment from the rasteriser, it's too late to "fix it up" to make it a conservatively rasterised fragment. Conservative rasterisation requires a distinct set of edge equations and/or rasteriser-walk algorithms.
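One way to see why conservative rasterization needs its own edge setup: the standard test samples the edge functions at the pixel center, while a conservative test has to offset each edge equation by the pixel's half-extent so that every overlapped pixel passes. Below is a minimal sketch on a 1x1 pixel grid using the common over-estimating offset trick; the triangle coordinates are made-up examples, and this is not how any particular GPU implements it:

```python
# Edge-function coverage tests for a triangle, standard vs conservative.
# Each edge is E(x, y) = a*x + b*y + c, with E >= 0 inside for CCW winding.

def edge_equations(tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    # Normalize winding to CCW so "inside" means all edges >= 0.
    if (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0) < 0:
        (x1, y1), (x2, y2) = (x2, y2), (x1, y1)
    eqs = []
    for (ax, ay), (bx, by) in (((x0, y0), (x1, y1)),
                               ((x1, y1), (x2, y2)),
                               ((x2, y2), (x0, y0))):
        a, b = -(by - ay), (bx - ax)
        c = -(a * ax + b * ay)
        eqs.append((a, b, c))
    return eqs

def standard_covered(tri, px, py):
    # Sample the edge functions at the pixel center only.
    cx, cy = px + 0.5, py + 0.5
    return all(a * cx + b * cy + c >= 0 for a, b, c in edge_equations(tri))

def conservative_covered(tri, px, py, h=0.5):
    # Shift each edge outward by its worst-case variation over the pixel
    # square, h * (|a| + |b|).  Slightly over-estimates near vertices,
    # which tiered conservative rasterization explicitly permits.
    cx, cy = px + 0.5, py + 0.5
    return all(a * cx + b * cy + c + h * (abs(a) + abs(b)) >= 0
               for a, b, c in edge_equations(tri))

# A sliver inside pixel (0, 0) that misses the pixel center:
sliver = [(0.1, 0.8), (0.9, 0.85), (0.9, 0.9)]
print(standard_covered(sliver, 0, 0))      # False: center not inside
print(conservative_covered(sliver, 0, 0))  # True: pixel is overlapped
print(conservative_covered(sliver, 0, 2))  # False: far pixel stays out
```

The offset lives in the edge setup, before any fragment is launched, which matches the point above: once the rasterizer has already decided which fragments to spawn from center samples, it is too late to recover the pixels it skipped.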

As for ROVs, it's the same as Intel Pixelsync, and GCN already supports that in OpenGL.
There's a strong consensus that it's a software kludge, perhaps one that's heavily bottlenecked by GDS. I honestly haven't thought about algorithms to do this in software on GCN, though.
 
By "manual interpolation", you mean interpolation using shader instructions? NVidia got there years earlier.

MSAA rasterization with a corner-point sample pattern + centroid could yield conservative pixels being spawned, I think:
https://msdn.microsoft.com/en-us/library/windows/desktop/cc627092(v=vs.85).aspx

Depends on whether the MSAA pattern can be pushed into the corners, whether the spawning of multiple MSAA pixel shaders can be suppressed, and whether centroid sampling is unavailable under CR so that it can be repurposed and "patched" into the shader instructions. If there's a will there's a way. :)
 
I need to sleep on it; I haven't a clue what you're getting at with "suppression"...

As far as I can tell, pushing sample positions into corners doesn't work (the acute end of a triangle can easily squeeze through multiple pixels once it's less than one pixel wide).
 
I have a layman's question. Does the versioning of GCN encompass the whole GPU? It seems odd that we reference GCN 1, 2 and 3 (or 1.0, 1.1 and 1.2) without mentioning that AMD seems to scope GCN versioning to changes in the compute cores only. Even in old overview figures of GCN-based GPUs, AMD made a habit of labelling each individual SIMD as "GCN".
 
MSAA rasterization with corner point pattern + centroid could yield conservative pixels being spawned I think
As Jawed says, this is insufficient. Triangles can intersect a pixel without intersecting any of the corners. They can also be fully contained within a pixel without touching any of the corners. Both of these cases need to be handled with conservative rasterization. It gets even more finicky when it comes to accounting for the error introduced by fixed point snapping in the rasterizer and primitives that are snapped to degenerate lines/points.

I'll leave you guys to ponder those details and how they might map to "tiers" of hardware support :)
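The two failure cases listed above are easy to reproduce numerically: a corner-sample test (point-in-triangle at the four pixel corners) misses triangles that genuinely overlap the pixel, which a proper separating-axis overlap test catches. The two example triangles below are made up for illustration:

```python
# Why sampling at pixel corners is not conservative rasterization:
# two triangles that overlap the unit pixel [0,1]x[0,1] while touching
# none of its four corners.

def point_in_triangle(p, tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    if (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0) < 0:  # force CCW
        (x1, y1), (x2, y2) = (x2, y2), (x1, y1)
    px, py = p
    for (ax, ay), (bx, by) in (((x0, y0), (x1, y1)),
                               ((x1, y1), (x2, y2)),
                               ((x2, y2), (x0, y0))):
        if (bx - ax) * (py - ay) - (by - ay) * (px - ax) < 0:
            return False
    return True

def triangle_overlaps_box(tri, bmin, bmax):
    # 2D separating axis test: the two box axes plus the three edge normals.
    corners = [(bmin[0], bmin[1]), (bmax[0], bmin[1]),
               (bmin[0], bmax[1]), (bmax[0], bmax[1])]
    axes = [(1.0, 0.0), (0.0, 1.0)]
    for i in range(3):
        (ax, ay), (bx, by) = tri[i], tri[(i + 1) % 3]
        axes.append((-(by - ay), bx - ax))  # outward edge normal
    for nx, ny in axes:
        tproj = [nx * x + ny * y for x, y in tri]
        bproj = [nx * x + ny * y for x, y in corners]
        if max(tproj) < min(bproj) or max(bproj) < min(tproj):
            return False  # found a separating axis
    return True

pixel_corners = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
contained = [(0.3, 0.4), (0.7, 0.4), (0.5, 0.7)]  # fully inside the pixel
sliver = [(-0.5, 0.45), (1.5, 0.5), (1.5, 0.55)]  # crosses, misses corners
for tri in (contained, sliver):
    print(any(point_in_triangle(c, tri) for c in pixel_corners),  # False
          triangle_overlaps_box(tri, (0.0, 0.0), (1.0, 1.0)))     # True
```

Both triangles overlap the pixel, yet no corner sample ever fires, so any scheme built on a fixed set of sample points inside (or on the border of) the pixel has the same blind spots.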
 
Gotcha. I "forgot" that triangles can fall into the void between sample points with MSAA too.
In any case, I would love to see the actual hardware implementation logic. It's difficult to understand exactly the bad cases of GPU SIMD processing in the graphics pipeline (SIMD behaviour in all pipeline stages). Occupancy numbers lead to trial-and-error programming.

I need to sleep on it; I haven't a clue what you're getting at with "suppression"...

I think for MSAA the hardware must somehow decide how to allocate lanes (threads) for a triangle whose coverage consists of both clustered and isolated fragments. It still needs to form quads. So does it have a puzzle-solver ALU? What are the shapes of 100% efficient triangles, and what is the shape of 50% efficient ones? If quads are simply grid-positioned, then triangle alignment (in the sense of whether the long side touches the lattice or the diagonal) matters for performance.
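The quad-efficiency question in this post can be made concrete with a toy rasterizer: cover an 8x8 tile at pixel centers, group pixels into grid-aligned 2x2 quads, and charge four lanes for any quad that contains at least one covered pixel. The triangles and tile size are made-up examples, and real hardware may pack quads more cleverly than this:

```python
# Toy quad-occupancy model: any touched 2x2 quad costs 4 SIMD lanes.

def inside(tri, x, y):
    # Inclusive edge-function test at the pixel center (CCW after
    # winding normalization).
    (x0, y0), (x1, y1), (x2, y2) = tri
    if (x1 - x0) * (y2 - y0) - (y1 - y0) * (x2 - x0) < 0:
        (x1, y1), (x2, y2) = (x2, y2), (x1, y1)
    px, py = x + 0.5, y + 0.5
    return all((bx - ax) * (py - ay) - (by - ay) * (px - ax) >= 0
               for (ax, ay), (bx, by) in (((x0, y0), (x1, y1)),
                                          ((x1, y1), (x2, y2)),
                                          ((x2, y2), (x0, y0))))

def quad_efficiency(tri, tile=8):
    """Return (covered pixels, lanes launched, useful-lane ratio)."""
    covered = {(x, y) for x in range(tile) for y in range(tile)
               if inside(tri, x, y)}
    quads = {(x // 2, y // 2) for x, y in covered}
    lanes = 4 * len(quads)  # a touched quad always launches all 4 lanes
    return len(covered), lanes, len(covered) / lanes

compact = [(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)]  # mostly fills its quads
sliver = [(0.0, 3.4), (8.0, 3.4), (8.0, 3.6)]   # one thin row of pixels
print(quad_efficiency(compact))  # (36, 40, 0.9)
print(quad_efficiency(sliver))   # (4, 8, 0.5): half the lanes are helpers
```

In this model a grid-aligned sliver never does better than 50% useful lanes, since each covered pixel drags along an entire quad, while a compact triangle amortizes whole quads and only wastes lanes along its edges.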
 
This spreadsheet is from GAF. I'm not sure if it's accurate, but wow, amazing if true.

http://m.neogaf.com/showpost.php?p=156266935

They were specific about hardware versions, so I'm a bit confused, because if this is true then the first GPU with a full DX12 implementation would be, well, the Radeon HD 7790.
Whatever the reality is, it's quite surprising how computer technology has evolved in the last two decades! I recently discovered something interesting. A bit of history... thanks to an epic fail of the game The Lion King on some PCs running Windows 95, Microsoft developed DirectX.
 