DirectX 12: The future of it within the console gaming space (specifically the XB1)

Brilliant ... more GDC 2015 sessions are up, this time from MS's OEM partners (AMD/Intel/Nvidia), and even Cloud Imperium of Star Citizen is talking Dx12 :)




Advanced Visual Effects With DirectX 11 & 12: Welcome/Getting the Most Out of DirectX12
Speakers: Nicolas Thibieroz (AMD), David Oldcorn (AMD), Evan Hart (NVIDIA)

DirectX12 represents the start of a new era for graphics development. Programmers are now empowered to leverage GPU resources and exert a level of control so far unprecedented in standard graphics APIs. In this talk, AMD and NVIDIA will discuss the new programming model and features of the new API. This is an advanced tutorial, for developers familiar with graphics programming, on how to start developing efficient and effective D3D12 applications straight away, packed with useful tips and insights.
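For anyone who hasn't looked at the API yet, here's a minimal sketch of the explicit submission model the abstract is referring to. Illustrative only: it assumes an already-created ID3D12Device* and ID3D12CommandQueue* and skips all error handling.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// The application, not the driver, now owns command memory, recording and submission.
void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12CommandAllocator>    allocator;
    ComPtr<ID3D12GraphicsCommandList> cmdList;

    // Backing memory for recorded commands is an explicit, reusable object.
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator.Get(), nullptr, IID_PPV_ARGS(&cmdList));

    // ... record resource barriers, state and draw calls here ...

    cmdList->Close();                                 // finish recording
    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);             // explicit submission
    // Synchronising with the GPU (fences) is also the application's job.
}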




Advanced Visual Effects With DirectX 11 & 12: Visual Effects in Star Citizen
Speaker: Alistair Brown (Cloud Imperium)

A detailed look into the visual effects in development for the crowd funded open world space game Star Citizen and its single player military counterpart Squadron 42. This includes the rendering and lighting of volumetric gases for everything from smoke trails and massive explosions to gas-clouds several hundred miles across. Other rendering effects such as our ship damage system and shield rendering solution will also be presented.




Advanced Visual Effects With DirectX 11 & 12: Advancements in Tile-based Compute Rendering
Speaker: Gareth Thomas (AMD)

Tiled deferred rendering and Forward+ rendering are becoming increasingly popular as efficient ways to handle the ever increasing numbers of dynamic lights in games. This talk looks at some of the most recent improvements to this approach as well as exploring the idea of clustered rendering.
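To make the idea concrete, here is a rough CPU-side sketch of the per-tile light binning at the heart of tiled/Forward+ shading. In a real renderer this runs in a compute shader against per-tile depth bounds (and clustered rendering additionally slices in depth); the types and numbers below are purely illustrative.

#include <algorithm>
#include <cstdint>
#include <vector>

struct ScreenLight { float x, y, radius; };       // light bounds in screen space (pixels)

constexpr int kTileSize = 16;                     // shade in 16x16 pixel tiles

// For each screen tile, collect the indices of lights whose bounds overlap it.
// Shading then walks only that short per-tile list instead of every light in the scene.
std::vector<std::vector<uint32_t>> BinLights(const std::vector<ScreenLight>& lights,
                                             int width, int height)
{
    const int tilesX = (width  + kTileSize - 1) / kTileSize;
    const int tilesY = (height + kTileSize - 1) / kTileSize;
    std::vector<std::vector<uint32_t>> bins(tilesX * tilesY);

    for (uint32_t i = 0; i < (uint32_t)lights.size(); ++i)
    {
        const ScreenLight& l = lights[i];
        // Range of tiles touched by the light's bounding square.
        const int x0 = std::max(0,          int((l.x - l.radius) / kTileSize));
        const int x1 = std::min(tilesX - 1, int((l.x + l.radius) / kTileSize));
        const int y0 = std::max(0,          int((l.y - l.radius) / kTileSize));
        const int y1 = std::min(tilesY - 1, int((l.y + l.radius) / kTileSize));

        for (int ty = y0; ty <= y1; ++ty)
            for (int tx = x0; tx <= x1; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    return bins;
}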




Advanced Visual Effects With DirectX 11 & 12: Sparse Fluid Simulation and Hybrid Ray-traced Shadows for DirectX 11 & 12
Speakers: Jon Story (NVIDIA), Alex Dunn (NVIDIA)

This session will cover two high end techniques that benefit from advanced GPU hardware features. High resolution fluid simulation in games has always been problematic, but with tiled resources it's possible to optimize for memory footprint, simulation cost and rendering efficiency. Conventional shadow mapping has its pros and cons; this hybrid technique combines the best of shadow mapping with ray-traced shadows using conservative rasterization.
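As a rough illustration of the tiled resources angle (not code from the talk), this is how a sparse volume might be created in D3D12: the texture reserves address space only, and individual 64KB tiles get mapped to heap memory on demand via ID3D12CommandQueue::UpdateTileMappings. It assumes an existing ID3D12Device*; 3D (volume) tiled resources additionally need Tiled Resources Tier 3 hardware.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> CreateSparseFluidVolume(ID3D12Device* device)
{
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE3D;
    desc.Width            = 512;                        // illustrative volume size
    desc.Height           = 512;
    desc.DepthOrArraySize = 512;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_R16_FLOAT;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_64KB_UNDEFINED_SWIZZLE; // required for reserved resources
    desc.Flags            = D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS;  // written by the simulation

    // No memory is committed here; only the tiles the simulation actually
    // touches need to be backed by a heap later on.
    ComPtr<ID3D12Resource> volume;
    device->CreateReservedResource(&desc, D3D12_RESOURCE_STATE_COMMON,
                                   nullptr, IID_PPV_ARGS(&volume));
    return volume;
}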




DirectX 12: A New Meaning for Efficiency and Performance (Presented by AMD)
Speakers: Dave Oldcorn (AMD), Stephan Hodes (AMD)

Direct3D 12 adds key new rendering features such as multiple queues for asynchronous compute and DMA, and the ultra-performance API both eliminates performance bottlenecks and enables new techniques. AMD will talk about the key interactions between the new D3D12 capabilities and AMD hardware and how to get the best from both. This session will include live demos.
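For reference, the "multiple queues" bit maps onto the three queue types the API exposes; a minimal, illustrative sketch (assuming an existing ID3D12Device*):

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphics,
                  ComPtr<ID3D12CommandQueue>& compute,
                  ComPtr<ID3D12CommandQueue>& copy)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;     // graphics (can also do compute/copy)
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphics));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;    // asynchronous compute work
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&compute));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;       // DMA-style transfers
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&copy));

    // Work submitted to the three queues can run concurrently; the application
    // expresses any dependencies explicitly with fences.
}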




OIT to Volumetric Shadow Mapping, 101 Uses for Raster Ordered Views using DirectX 12 (Presented by Intel)
Speaker: Leigh Davies (Intel)

One of the new features of DirectX 12 is Raster Ordered Views. This adds ordering back into Unordered Access Views, removing race conditions within a pixel shader when multiple in-flight pixels write to the same XY screen coordinates. This allows algorithms that previously required linked lists of pixel data to be processed efficiently in bounded memory. The talk shows how everything from order-independent transparency to volumetric shadow mapping and even post-processing can benefit from using Raster Ordered Views to provide efficient and, more importantly, robust solutions suitable for real-time games. The session uses a mixture of real-world examples of where these algorithms have already been implemented in games and forward-looking research to show some of the exciting possibilities that open up with this ability coming to DirectX.
 
The 3 OEMs are now talking about Dx12, which is a great thing ... Looking forward to the unified Dx12 across XB1 and PC having a big impact this generation ...
 
What use did that have though? How much action did XB360's tessellator see, for example? I don't recall anything game-changing in the end results. Any DX10-like features were premature because the tech wasn't ready (as is always the case for first-gen DXNew features; it takes a second wave of GPUs to refine the ideas into something practical). AMD offered a DX11 part. MS could tweak it, but not redesign it (AMD would need a year or three to create a new GPU on a new spec that's radically different from the old one), so they're left with a DX11 part with maybe a couple of DX12 niceties regarding memory addressing or some such. What they won't have is a higher shader model than PS4, or extra functional blocks, or anything significantly different. Similar to PS4 not having anything significantly different save a prod on the GPU<>CPU communication lines and a poke on the ACEs. The difference in real terms to what DX12 enables on XB1 over PS4 will be lost among the many other variables affecting what's on screen. It'll just be the driver overhead and the ability to feed the GPU that matters. The only area that's perhaps a little grey, which I consider low probability, is that the second GCP makes a significant difference. Perhaps the second GCP gets better utilisation of the GPU than compute does in real workloads, and X1's efficiency and max utilisation end up higher than PS4's?
Lol, I didn't even know the 360 had a tessellator. I was actually thinking that the unified shader model / Shader Model 4.0 introduced with DX10 was a good thing for the Xbox 360.

I don't expect any grand changes; Maxwell to Maxwell 2 wasn't a redesign, but subtle changes can enable some features that were previously harder to accomplish. Like your tessellation example: even if VXGI were accelerated on XBO, it may not have the horsepower to do it, so we won't see it implemented anyway.

I'm not even sure the 2nd GCP is a factor; I was looking at Maxwell 2 diagrams and unfortunately I don't see any resemblance there.

But there could be other subtle things. I'm not expecting anything large, but the difference between tier 1 and tier 2 tiled resources is somewhat impactful, and the difference between a part that supports volume tiled resources and one that doesn't is impactful (at least for voxels). If minor changes can result in speeding up certain scenarios, that would be ideal in terms of support.
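For reference, these deltas surface to developers as optional caps queried at runtime; a rough sketch (the enum and struct names are the real D3D12 ones, the rest is illustrative):

#include <cstdio>
#include <d3d12.h>

void PrintOptionalCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        // Tiled resources: Tier 2 adds guarantees over Tier 1; Tier 3 adds volume (3D) tiled resources.
        printf("Tiled resources tier:     %d\n", (int)opts.TiledResourcesTier);
        printf("Conservative raster tier: %d\n", (int)opts.ConservativeRasterizationTier);
        printf("Raster ordered views:     %s\n", opts.ROVsSupported ? "yes" : "no");
    }
}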

It should be noted that we have not seen a single game leverage tiled resources yet, IIRC!

With respect to XBO, any minor performance-enhancing feature will help with its vastly weaker GPU. As the games continue to evolve, PS4 will rely on its available muscle; Xbox One will have to rely on its feature set, if it has it. I've accepted that Xbox One will never surpass PS4 in the generic sense, but it may be able to keep up in specific scenarios.

Edit: I'd rather take the versus discussion out because it muddies things. What if PS4 didn't exist? Did MS really just hand out a console that had no chance of delivering great-looking games in 2017? What is the strategy to obtain performance? If consoles are limited by TDP and wattage, then I would be looking at other avenues. The Japanese have been limited by the CC sizes of their engines (cost to sell rises with CC and HP, and gas is expensive), so they invested heavily in valve timing technology. The birth of VTEC.

I'm not holding my breath on this dual-GCP thing; that's going to be a big letdown when we find out otherwise. A 10-15% improvement, if real, is already massive.
 
Edit: I'd rather take the versus discussion out because it muddies things.
The versus discussion only exists here as a comparison point between a 'non-DX12 architecture' and XB1, with whatever differences XB1 has being areas where DX12 may have a greater benefit. The impact of DX12 on consoles extends beyond the API, because the methods can be rolled into other systems if not already present. E.g. if DX12 greatly improves XB1 efficiency and doesn't rely on specific hardware features to enable that, the same principles can presumably be applied to PS4 SDK2. This is actually important for cross-platform titles, and the importance/impact of DX12 on XB1 is going to be affected by how well it can or can't be applied in games to PS4 (and possibly older PCs?).

GDC should explain a lot more on hardware requirements.
 
What use did that have though? How much action did XB360's tessellator see, for example? I don't recall anything game-changing in the end results. Any DX10-like features were premature because the tech wasn't ready (as is always the case for first-gen DXNew features; it takes a second wave of GPUs to refine the ideas into something practical). AMD offered a DX11 part. MS could tweak it, but not redesign it (AMD would need a year or three to create a new GPU on a new spec that's radically different from the old one), so they're left with a DX11 part with maybe a couple of DX12 niceties regarding memory addressing or some such. What they won't have is a higher shader model than PS4, or extra functional blocks, or anything significantly different. Similar to PS4 not having anything significantly different save a prod on the GPU<>CPU communication lines and a poke on the ACEs. The difference in real terms to what DX12 enables on XB1 over PS4 will be lost among the many other variables affecting what's on screen. It'll just be the driver overhead and the ability to feed the GPU that matters. The only area that's perhaps a little grey, which I consider low probability, is that the second GCP makes a significant difference. Perhaps the second GCP gets better utilisation of the GPU than compute does in real workloads, and X1's efficiency and max utilisation end up higher than PS4's?
The tessellator of the Xbox 360 had nothing to do with tessellation in DX10/11. It was more or less a relic/experiment that just sat inactive in the GPU until MS found a way to actually present it to developers. I would go so far as to say that it wasn't a planned feature from MS, but it was better to use it somehow than to let it sit inactive on the die. As far as I know it was used in Gears of War 2/3, but what it really did, I can't tell. But the real DX10 difference was the unified shaders, like iroboto wrote. And that wasn't a minor difference.
 
Yes, in that scenario it is worth mentioning, since we know at the very least that PS4 should be able to carbon-copy D3D12 (if it doesn't already have it).

Sony has been so tight-lipped; it's not that PS4 isn't worth talking about in this regard, but there's unfortunately nothing to talk about.

I'm going to assume that all D3D12 features will be applied to PS4. Feature set 12_0, whatever it is, will be interesting to see play out. Looking at what developers are tech-demoing with tiled resources is interesting, with respect to using tiled resources for GI and shadow maps.
 
The tessellator of the Xbox 360 had nothing to do with tessellation in DX10/11. It was more or less a relic/experiment that just sat inactive in the GPU until MS found a way to actually present it to developers. I would go so far as to say that it wasn't a planned feature from MS, but it was better to use it somehow than to let it sit inactive on the die. As far as I know it was used in Gears of War 2/3, but what it really did, I can't tell. But the real DX10 difference was the unified shaders, like iroboto wrote. And that wasn't a minor difference.
Right, but that was a mind-numbingly obvious, highly discussed difference between the architectures. We're not talking about that level of difference between DX11 and 12, a whole architectural paradigm delta between XB1 and PS4, so it doesn't enter into the argument. Looking at DX10's feature set and tessellation, XB360 had tessellation hardware but wasn't a DX10 part. It's just as possible that XB1 has features that mirror/shadow DX12 hardware requirements but aren't directly comparable. That's actually more likely given everything known about the similarities of the machines and the timelines. XB360, for example, was released one year ahead of DX10 and its tessellator wasn't DX10-compliant. XB1 was released some two years before DX12 is due to be released. Ergo, it'd be wrong to assume that because MS was planning DX12, they'd have fully integrated DX12 into their GCN-based GPU. We have precedent showing that prior knowledge of future APIs doesn't guarantee early hardware support.

But this is all fairly irrelevant discussion until we know exactly what DX12 is and what hardware features it can make use of! Best guess: NVIDIA Maxwell is claiming DX12 compliance, so that sets a minimum set of required hardware features which, AFAIK, don't include anything wildly deviant from GCN etc.
 
Yes, in that scenario it is worth mentioning, since we know at the very least that PS4 should be able to carbon-copy D3D12 (if it doesn't already have it).

Sony has been so tight-lipped; it's not that PS4 isn't worth talking about in this regard, but there's unfortunately nothing to talk about.

I'm going to assume that all D3D12 features will be applied to PS4. Feature set 12_0, whatever it is, will be interesting to see play out. Looking at what developers are tech-demoing with tiled resources is interesting, with respect to using tiled resources for GI and shadow maps.
I assume that most if not all 'features' in DX12 are already available through GNM.
We know vertex interpolators are available in pixel shaders, and so on.

Really hope such features will be exposed in DX12, although I wonder if that is possible without creating a separate DX12 path for each vendor.
 
Lol, I didn't even know the 360 had a tessellator. I was actually thinking that the unified shader model / Shader Model 4.0 introduced with DX10 was a good thing for the Xbox 360.

Allandor said:
But the real DX10 difference was the unified shaders, like iroboto wrote. And that wasn't a minor difference.

I don't think unified shaders were ever a "feature" of DX10. They were just an implementation detail of the GPUs that happened to coincide with the introduction of DX10, as far as I'm aware.

Also, I didn't think the 360 supported the full DX10 Shader Model 4.0, unless anyone more knowledgeable knows otherwise?
 
I don't think unified shaders were ever a "feature" of DX10. They were just an implementation detail of the GPUs that happened to coincide with the introduction of DX10, as far as I'm aware.
DX10 introduced unified shaders specifically to match the hardware, no? Otherwise there's not much point to US in the API as it'd be broken into individual shaders. Were there any graphics cards released that were DX10 and not US?

Also, I didn't think the 360 supported the full DX10 Shader Model 4.0, unless anyone more knowledgeable knows otherwise?
It wasn't. Like XB1 is described as DX11+, XB360 was described as DX9+ (and original XB was DX7+, no?). The consoles have supported custom extensions to access specifics of the hardware, but haven't yet been fully DXNext-compatible. Again, DX12 on XB1 is a curiosity, as we don't know what the hardware requirements are.
 
Also, I didn't think the 360 supported the full DX10 Shader Model 4.0, unless anyone more knowledgeable knows otherwise?

They were missing a couple of things like the geometry shader & stream out (not quite the same as memexport) - although even if it did have those, I doubt GS performance would have been conducive to using it often.

Even the tessellator was only used in a few exclusives - Halo 3+, Halo Wars, Gears 2 - all related to liquid surfaces. Viva Pinata's garden. I was never clear on whether Epic/Bioware used it similarly for the various terrain maps in Mass Effect. Maybe one or two other 3rd-party games turned it on for something, but nothing to write home aboot. Frostbite and Anvil didn't bother in the end for their terrain/water.

There were a couple of surface format issues, although I don't think DX10 required them per se. Most PC designs had settled on supporting newer formats and blend modes (like FP16 blend, which R580 and G70 already had). There were probably some other gotchas with filtering said surfaces. Just one of those silly missed opportunities. I sort of recall something about cubemap arrays as well.

Also missing some more modern Z/depth algorithms that were already on PC, but again, don't think they were a requirement.

(and original XB was DX7+, no?)
Something more akin to DX8.1, IIRC. They apparently went pretty low-level as well.

Gamecube was the "dx7-like" one with HW T&L, I thought.
 
DX10 introduced unified shaders specifically to match the hardware, no? Otherwise there's not much point to US in the API as it'd be broken into individual shaders. Were there any graphics cards released that were DX10 and not US?

As far as I recall it was just coincidental timing rather than part of the DX10 spec. I don't think there was ever a DX10 GPU released without unified shaders, but G80 was originally rumored (perhaps even planned) to be a non-unified DX10 design.

I did a quick search and was able to find this, although quite a few other links said similar things. It may be that they were simply incorrect rumors, though:

http://hothardware.com/news/Unified-Shaders-Not-Required-For-DX10-Support
 
As far as I recall it was just coincidental timing rather than part of the DX10 spec. I don't think there was ever a DX10 GPU released without unified shaders, but G80 was originally rumored (perhaps even planned) to be a non-unified DX10 design.
This is interesting, because I was also under the impression that DirectX 10 compatibility required a unified shader architecture GPU, but perhaps the requirement is only to present an interface that looks like a unified shader architecture to applications, and smart drivers can divvy up what's what.
 
This is interesting, because I was also under the impression that DirectX 10 compatibility required a unified shader architecture GPU, but perhaps the requirement is only to present an interface that looks like a unified shader architecture to applications, and smart drivers can divvy up what's what.

The way it works is that D3D really just specifies what functionality is available. It doesn't have a lot to say about how something happens; it's more concerned with what you can do. Obviously the specified functionality is generally designed around the real capabilities of existing and future hardware, but it's not strictly required that a D3D-capable device conforms to a particular hardware spec.

So regarding the unified shaders... in DX8 and DX9 you had different instruction sets for vertex shaders and pixel shaders. There were a lot of instructions that were common between the two (add, multiply, dot product, etc.), but also many that weren't. For instance, vertex shaders could dynamically index into constant registers, but pixel shaders could not. Pixel shaders could sample textures, but vertex shaders could not (at least until SM 3.0 in DX9, where it became an optional feature). When DX10 rolled around they introduced the "common shader core", which essentially unified the instruction set for vertex and pixel shaders (albeit with a few exceptions). This made sense, since at the time GPUs were beginning to switch over from having dedicated VS/PS hardware to having "unified" hardware for both (as in the Xbox 360). At the same time, having the common shader core combined with the addition of a new shader stage (the geometry shader) gave hardware vendors even more motivation to switch to unified shader hardware. However, if someone wanted to, they could have made a GPU with dedicated VS/GS/PS units in the chip and it still could have been considered DX10-capable as long as it supported the necessary instruction set. It probably would have made very little sense to make such a chip (at least for AMD and Nvidia), but it would have been possible to do so.

Where this idea really comes into play is when newer hardware comes out that still supports older APIs. For instance, all recent GPUs have programmable shader hardware, but it's still valid for them to support the old DX7 fixed-function pipeline, which they do by emulating that pipeline with shader programs. Or, as a more recent example, DX11 specifies a register-based interface for binding textures to shaders, but under the hood AMD's GCN GPUs don't actually have any physical registers for texture descriptors - instead the shader units pull the descriptors directly from memory. In both cases the newer GPUs still fulfill the old specification, even though the newer hardware no longer resembles the old hardware the spec was based around.
 
DX12 architecture

I just don't understand why people here refer to DX12 as an "architecture". It's an API. Calling HTML5 an "architecture" would make exactly as much sense.
A modern GPU is a general-purpose processor; it can be "programmed" to support any API. The question could be whether some of the DX12 paths will be fast enough (probably). But that's it.
 
I just don't understand why people here refer to DX12 as an "architecture". It's an API. Calling HTML5 an "architecture" would make exactly as much sense.
A modern GPU is a general-purpose processor; it can be "programmed" to support any API. The question could be whether some of the DX12 paths will be fast enough (probably). But that's it.

Well ... DX12 isn't an architecture, you're quite right. But while HTML 5 can be realistically and practically implemented on most general purpose CPUs, DX12 cannot.

Ultimately, anything can be implemented on a CPU (general purpose processor) if you ignore speed. But if you ignore speed, what is the point in giving a shit about architecture ...?

I basically agree with you, but the nuances are always important. So, like, nuance ... and stuff.
 
DX12 is an API exposing a number of features, and it is lower-level than DX11.
Whether it's any good is yet to be determined.
(Good = simple, minimal & complete)
 
(Good = simple, minimal & complete)

It's not going to be simple, as in a "high-level" API that an average-level dev will be able to code in ... It's extremely low-level, surfacing more of the bare HW for devs to do more with.

They, MS and their OEM partners, have made it clear that Dx12 is now stripped down to the bare bones, leaving the app developer to do more work to get even the most basic rendering ...

And as for minimal, minimal in what sense? It's going to be a thinner layer that does less than what the Dx11 drivers do ... BUT it's not minimal in terms of developer effort, since devs will need to code more to make up for the lack of out-of-the-box features in the DX drivers.
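To give a flavour of the extra work being described: even "wait until the GPU is done" becomes explicit fence management in Dx12. A rough, illustrative sketch (it assumes an existing device and queue and skips error handling):

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void WaitForGpuIdle(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    const UINT64 fenceValue = 1;
    queue->Signal(fence.Get(), fenceValue);           // GPU writes the value when it gets here

    if (fence->GetCompletedValue() < fenceValue)      // GPU not there yet?
    {
        HANDLE evt = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        fence->SetEventOnCompletion(fenceValue, evt); // fire the event at that value
        WaitForSingleObject(evt, INFINITE);           // block this CPU thread
        CloseHandle(evt);
    }
}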

Dx12, from everything they've shown and talked about to date, is targeted at engine architects and experienced low-level devs who are currently hindered by Dx11's drivers ...
 
But if you ignore speed, what is the point in giving a shit about architecture ...?

That's right, but I don't see anything that requires specific hardware in the current DX12 prospects. It's a lot like Mantle (and I mean almost identical); it uses different names for the same concepts, but the result is just a "low-level DX11", i.e. giving the user a way to do more things in their code that were done by the driver in DX11. It's still "beating a dead horse", i.e. we still use the CPU for everything.
 