DirectX 12: The future of it within the console gaming space (specifically the XB1)

Who knows. I don't think there's a release date for Windows 10 on Xbox One, or for Quantum Break yet. No idea which will come first. I'm sure, being first party, they could have access to everything early, but without knowing the state of Windows 10 and DirectX 12 on Xbox One, there's no way to know what they're doing.
 
To me, the question is whether DX12 can improve GPU utilization a little, as well as reduce CPU use. From watching all of the GDC videos, DirectX 12 can improve GPU utilization, but I don't know whether Xbox One already gained most of this functionality when they added the "fast semantics" API improvements. No matter what, Xbox One's GPU is always going to be shader limited compared to the PS4 GPU, so I wouldn't expect Xbox One to suddenly start pushing 1080p in every game, but that's not really why I'm interested in hearing the details of this stuff.

I'll take a quick scan tonight when I get home, but honestly I didn't see (though I could have missed it) most of the newer features in the API part of the documentation. I scanned the DirectX 11 for Xbox One section and didn't see any functions that sounded remotely close to the slide decks above.

As for being shader limited on a fixed system, the transition to compute sounds promising. Games can't keep looking better and better, wave after wave, if they keep hitting the same bottlenecks. The industry is transitioning to compute, which it should have done years ago, and we now have engines, e.g. Oxide's, that are moving towards tiny tasks with little overhead so that any available resource can pick them up and do the work. ExecuteIndirect and typed UAV loads are a big thing for increasing GPU utilization. Games have not yet exploited Tiled Resources Tier 1 or Tier 2 either, and Tier 3 was just announced.
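Roughly, this is what an ExecuteIndirect setup looks like on the PC D3D12 side. Just a sketch: the function and buffer names are made up, and a real engine would typically pack more argument types (root constants, vertex buffer changes) into the signature.

```cpp
#include <d3d12.h>

// GPU-driven drawing: a compute shader fills argBuffer with D3D12_DRAW_ARGUMENTS
// structs (and countBuffer with how many draws survived culling); the CPU records
// a single ExecuteIndirect call instead of one draw call per object.
void DrawFromGpuGeneratedArgs(ID3D12Device* device,
                              ID3D12GraphicsCommandList* cmdList,
                              ID3D12Resource* argBuffer,
                              ID3D12Resource* countBuffer,
                              UINT maxDraws)
{
    // Command signature with a single argument type: a plain draw.
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;

    D3D12_COMMAND_SIGNATURE_DESC sigDesc = {};
    sigDesc.ByteStride = sizeof(D3D12_DRAW_ARGUMENTS);
    sigDesc.NumArgumentDescs = 1;
    sigDesc.pArgumentDescs = &arg;

    ID3D12CommandSignature* signature = nullptr;
    device->CreateCommandSignature(&sigDesc, nullptr,
                                   __uuidof(ID3D12CommandSignature),
                                   reinterpret_cast<void**>(&signature));

    // The GPU consumes up to maxDraws argument structs; the count buffer trims
    // that to the number of draws that actually survived GPU-side culling.
    cmdList->ExecuteIndirect(signature, maxDraws, argBuffer, 0, countBuffer, 0);

    signature->Release();
}
```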

There's a lot of moving pieces it seems, and it will be a long time before best performance practices begin to emerge.

edit: besides, as you write, I'm more interested in seeing an improvement in quality, not necessarily so much in performance (if resolution means that to you).

and edit: yes, I love the concept of procedural generation coming back into play; things have been too curated. I'd love to see some of those highly curated games include some procedurally generated gameplay here and there, just so that every run-through isn't always the same. Sort of like Star Control 2's single player. Man, that game is still the greatest game. :love:
 
Do we know when we'll see the first benefits of DX12 in an XB1 game?
Is there a chance that Quantum Break can benefit from it, or is it too late for this game?

Should be Fable Legends. You won't see the same benefits as Oxide/Stardock's game though. But you can see they have already implemented some code for UAV loads and parallel rendering.

As for QB, I think it's too late for them; they'd get about as much out of it as Fable Legends would. You'd have to look further down the pipeline, much further down, to see native DX12 games.
 
Thanks! I am really looking forward to those games... I wonder if those benefits also translate directly into gameplay (not only more characters, but better physics and AI) or just visuals.

What would be the simplest thing a dev can add to the game when there are some spare CPU cycles?
For instance, we see that apparently when you have some GPU resources left (PS4), you just crank up the resolution, i.e. make easy use of available resources and offer some bullet points for marketing :)
Is there an equivalent 'easy to improve' for spare CPU resources?
 

AI processing tends to come to mind. If it's not too complex, maybe AI that has some minor machine learning, like logistic/linear regression, built in for specific types of games, or maybe a small neural network for more complex behaviours. In Half-Life, AI was controlled by hotspots on the map: enemies would run to specific locations that made sense relative to where the player was standing. But if you trained the AI extensively with a simple 3-to-5-layer neural network, it could adapt to a large variety of situations and give the game a more natural look and feel.
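Just to give a feel for the scale involved: evaluating a tiny feed-forward network per agent per tick is nearly free on a modern CPU. This is only a rough sketch; the layer sizes and the meaning of the inputs/outputs are invented for illustration, and the weights would come from offline training.

```cpp
#include <algorithm>
#include <array>

constexpr int kInputs  = 8;   // e.g. distances, player visibility, health...
constexpr int kHidden  = 16;
constexpr int kOutputs = 4;   // e.g. scores for "advance", "flank", "cover", "retreat"

struct TinyNet {
    std::array<std::array<float, kInputs>, kHidden>  w1; // input -> hidden weights
    std::array<float, kHidden>                       b1;
    std::array<std::array<float, kHidden>, kOutputs> w2; // hidden -> output weights
    std::array<float, kOutputs>                      b2;

    // One forward pass: a few hundred multiply-adds per agent.
    std::array<float, kOutputs> Evaluate(const std::array<float, kInputs>& in) const {
        std::array<float, kHidden> h{};
        for (int j = 0; j < kHidden; ++j) {
            float s = b1[j];
            for (int i = 0; i < kInputs; ++i) s += w1[j][i] * in[i];
            h[j] = std::max(0.0f, s); // ReLU activation
        }
        std::array<float, kOutputs> out{};
        for (int k = 0; k < kOutputs; ++k) {
            float s = b2[k];
            for (int j = 0; j < kHidden; ++j) s += w2[k][j] * h[j];
            out[k] = s; // the AI picks the highest-scoring action
        }
        return out;
    }
};
```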
 
Big 64 bit virtual address space is really nice for tricks. We also exploit virtual memory tricks on all platforms (including PC/win64). The only downside for using virtual memory tricks is that your code becomes very hard to port to 32 bit operating systems (if you need to support those).
It sounds like the use of that feature as a differentiator is moot then. Either that, or DirectX 11 didn't support those tricks in order to maintain 32-bit OS compatibility.

Did the first 64-bit consoles, like the Nintendo 64 and the Atari Jaguar, benefit from that "64 bit virtual address space" trick too?
 
Checked the documentation again and nothing is there.
 
It sounds like the use of that feature as a differentiator is moot then. Either that, or DirectX 11 didn't support those tricks in order to maintain 32-bit OS compatibility.
The original source (Gyrling's Twitter post) said nothing about graphics rendering or the GPU. I believe he was talking about their CPU code. Virtual memory (partial mapping / remapping) is very useful for sparse containers, for example. These kinds of techniques haven't been available to game developers until recently: the next-gen consoles are 64-bit and most PC gamers have now updated to 64-bit operating systems. I fully understand the excitement of other developers, because our new component data model relies heavily on virtual memory tricks (it greatly reduces the number of hash lookups and linearizes the memory access patterns).
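For anyone wondering what such tricks look like in practice, here is a rough Win64 sketch of the general reserve-big/commit-on-demand pattern. It is illustrative only, not anyone's actual engine code, and the container name is made up:

```cpp
#include <windows.h>
#include <cstddef>

// Reserve a huge address range up front, but only commit (back with physical
// memory) the pages that are actually touched. Indices stay stable and the data
// stays linear in memory, with no hash lookups needed to find an element.
struct SparseFloatArray {
    float* base = nullptr;
    size_t reservedBytes = 0;

    bool Init(size_t maxElements) {
        reservedBytes = maxElements * sizeof(float);
        // MEM_RESERVE costs address space only; no physical memory is used yet.
        base = static_cast<float*>(
            VirtualAlloc(nullptr, reservedBytes, MEM_RESERVE, PAGE_NOACCESS));
        return base != nullptr;
    }

    void Write(size_t index, float value) {
        // Commit just the page containing this element. VirtualAlloc rounds the
        // range to page boundaries, and committing the same page twice is harmless.
        VirtualAlloc(base + index, sizeof(float), MEM_COMMIT, PAGE_READWRITE);
        base[index] = value;
    }

    void Release() { VirtualFree(base, 0, MEM_RELEASE); }
};
```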

Modern GPUs are not yet able to change the page mappings themselves. This could be a major security threat. However on some platforms virtual address space is shared between the CPU and the GPU, making virtual memory tricks also partially applicable on the GPUs (however the CPU still must do all the mapping changes). The newest (HSA) AMD integrated GPUs and Intel Broadwell share the virtual address space between CPU and GPU and offer cache coherence. OpenCL 2.0 at least seems to support this right now. CUDA 6 (http://www.anandtech.com/show/7515/nvidia-announces-cuda-6-unified-memory-for-cuda) seems to also support unified virtual memory (for discrete GPUs).
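To make the shared-address-space part concrete, allocating such memory through OpenCL 2.0 looks roughly like this. A minimal sketch, assuming a context on a device that reports fine-grained SVM support:

```cpp
#include <CL/cl.h>
#include <cstddef>

// Fine-grained shared virtual memory: the returned pointer is valid both on the
// host and inside kernels, with no explicit map/unmap calls required.
float* AllocateSharedBuffer(cl_context ctx, size_t count) {
    return static_cast<float*>(
        clSVMAlloc(ctx,
                   CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                   count * sizeof(float),
                   0 /* default alignment */));
}
```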
 
Did the first 64-bit consoles, like the Nintendo 64 and the Atari Jaguar, benefit from that "64 bit virtual address space" trick too?
The Jaguar had 64-bit ALUs for graphics processing (it didn't support 64-bit memory addressing). Similarly, you could call Intel AVX CPUs 256-bit CPUs, or you could even call the PS4 and Xbox One 512-bit architectures (since the four SIMD units in each GCN CU are 512 bits wide). The 64 in the names of these consoles is mainly there for marketing purposes.
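To illustrate that naming point with a rough sketch: an AVX "256-bit" instruction simply operates on eight 32-bit floats at once; the address space is still 64-bit either way.

```cpp
#include <immintrin.h>

// One AVX load/add/store each touches 256 bits = 8 floats, yet nothing about
// memory addressing changes; the CPU remains a 64-bit (address space) machine.
void AddEightFloats(const float* a, const float* b, float* out) {
    __m256 va = _mm256_loadu_ps(a);
    __m256 vb = _mm256_loadu_ps(b);
    _mm256_storeu_ps(out, _mm256_add_ps(va, vb));
}
```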
 
I don't want to derail the AMD 390 thread so I'm writing this here:
https://forum.beyond3d.com/posts/1831601/

AMD writes:
World's First Discrete GPU with Full DirectX12 implementation - Resource binding Tier 3

I guess what stuck out for me was the word "discrete", since if it were simply the first full DX12 implementation, they would just say so and not tack on "discrete". This leads me to believe, unless they had a marketing oopsie, that the first fully DX12-implemented GPU could very well be the XBO, as various rumours have suggested (and others denied).

Since the XBO is an SoC, the statement would still be true; the fact that "discrete" is tacked on could be some sort of admission that it wasn't first across the entire market.

Having said that, I guess the case isn't closed on the matter.
 

Perhaps Carrizo has the same GPU architecture and will be launched at the same event as Fiji.
Or perhaps Nvidia's Tegra X1 uses Maxwell 'gen3' and they are counting that.
 
Very true. Good points here. I've got a serious pattern of bias on this front. I really need to get away from Xbox.
 
This leads me to believe, unless they had a marketing oopsie, that the first fully DX12-implemented GPU could very well be the XBO, as various rumours have suggested (and others denied).

Since the XBO is an SoC, the statement would still be true; the fact that "discrete" is tacked on could be some sort of admission that it wasn't first across the entire market.


Or it could be an iGPU that wasn't finalized somewhere in late 2012 or early 2013, like Broadwell's.
 