DirectX 12: The future of it within the console gaming space (specifically the XB1)

Christian Gyrling @cgyrling
Being able to unmap/remap memory pages from its virtual address space while maintaining its contents is absolutely amazingly useful. #ps4
Big 64 bit virtual address space is really nice for tricks. We also exploit virtual memory tricks on all platforms (including PC/win64). The only downside for using virtual memory tricks is that your code becomes very hard to port to 32 bit operating systems (if you need to support those).
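For the curious, here's a minimal sketch of that idea on Linux (not the PS4 API; memfd_create/mmap are just the Linux way to get pages you can map at more than one virtual address). Unmapping one view doesn't lose the contents, because the other view still reaches the same physical pages:

// minimal Linux sketch: alias the same physical pages at two virtual
// addresses; dropping one mapping keeps the contents alive.
// (error handling omitted for brevity)
#include <sys/mman.h>   // memfd_create, mmap, munmap
#include <unistd.h>     // ftruncate, close
#include <cstdio>
#include <cstring>

int main() {
    const size_t len = 4096;
    int fd = memfd_create("alias_demo", 0);   // anonymous backing object
    ftruncate(fd, len);

    char* a = (char*)mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    char* b = (char*)mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    strcpy(a, "hello");      // write through one view
    munmap(a, len);          // unmap it...
    printf("%s\n", b);       // ...contents still visible through the other
    munmap(b, len);
    close(fd);
    return 0;
}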
 
I believe Intel had the problem of being pad-limited, plus a super pimped-up eDRAM process that could manage densities that would make Renesas cry.

An in-depth review on one of the tech sites had a quote from an Intel engineer to the effect of "32 MB would have been enough, but we had no reason not to put 128 MB on there".

Or power/heat. Or killer bees. Or ninjas. It could have been anything, really ;)

Gotta admit, killer bees didn't even cross my mind... :runaway:
 
So bandwidth is not speed; bandwidth is about how much data can be moved at once. When you execute the following statement:
hello = world + hi + var2 + var3 + object12 * happyworld;
we're pulling data from several different locations at the same time, which a wide bus can fetch in parallel. If I don't have a wide enough bus, I'll need to spend additional clock cycles gathering that data before I can begin processing.
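(Rough numbers to make that concrete: six 64-bit operands are 48 bytes. A 256-bit bus moves 32 bytes per transfer, so gathering them takes at least two transfers; a 1024-bit bus could do it in one. Real loads rarely line up this neatly once caches get involved, but that's the intuition.)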

I have always wondered what the implications of the memory arrangement and bus widths are for the XB1. I really need to do more research into this ...


XB1 DDR3 (off-chip)
- 4 channels x 64bit = 256bit

XB1 ESRAM (on-chip)
- 4 channels x 256bit = 1024bit



PS4 GDDR5 (off-chip)
- 4 channels x 64bit = 256bit
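
Peak bandwidth then follows from width x transfer rate (using the commonly quoted clocks):

XB1 DDR3-2133: 256 bit / 8 = 32 B x 2133 MT/s ≈ 68.3 GB/s
XB1 ESRAM @ 853 MHz: 1024 bit / 8 = 128 B x 853 MHz ≈ 109 GB/s each way (MS quotes ~204 GB/s peak combined read+write)
PS4 GDDR5-5500: 256 bit / 8 = 32 B x 5500 MT/s = 176 GB/s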
 
32 MB is a very small and limited pool. Even Intel used 128 MB of it in its Iris Pro parts.

Yes, 32 MB is small, but it's larger than the 10 MB eDRAM in the Xbox 360. And as the architects mentioned, the XB1 is simply an evolution of the Xbox 360 design: not just the scratchpad RAM, but lots of other parts of the architecture (including the command processors) are improvements on the Xbox 360's.

This was an interesting read 7 years ago on what could fit in the 10 MB eDRAM; this time round (XB1) it can fit so much more, assuming developers want to spend the time to optimize for it.
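Back-of-the-envelope, assuming a 32-bit color target plus 32-bit depth: 1280x720x8 B ≈ 7.0 MB, which just fit the 360's 10 MB (4xMSAA pushed it way over, hence predicated tiling). The same layout at 1920x1080 is ≈ 16 MB, which fits in 32 MB with room to spare; a fat deferred G-buffer eats that headroom up again fast, though.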
 
Since this thread is now about bandwidth, I found the recommended system requirements for GOG's version of System Shock 2 interestingly worded:
"512mb DX10+ graphics card with 128bit (or more) memory bus"
 
Big 64 bit virtual address space is really nice for tricks. We also exploit virtual memory tricks on all platforms (including PC/win64). The only downside for using virtual memory tricks is that your code becomes very hard to port to 32 bit operating systems (if you need to support those).
Hmmmmmmm, interesting.... Do you think that could have something to do with the PS4's secret sauce? I wonder if DirectX 12 supports that technique.
 
Hmmmmmmm, interesting.... Do you think that could have something to do with the PS4's secret sauce? I wonder if DirectX 12 supports that technique.
You don't need DX12 to exploit virtual memory mapping tricks. Windows and Linux/BSD/Unix-based operating systems all support these. Even Google exploits virtual memory in some of their (server-focused) libraries (hash/sparse maps and arrays). Virtual memory tricks are possible in all 64-bit operating systems, and nowadays (next-gen consoles) that means games can use tricks like this too.
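To make that concrete, here's a minimal Win64 sketch of the most common trick: reserve a huge contiguous stretch of the 64-bit address space for free, then commit real pages only where you touch it. The sizes here are arbitrary, just for illustration:

// minimal Win64 sketch: address space is cheap in 64-bit, physical pages are not
// (error handling omitted for brevity)
#include <windows.h>
#include <cstring>

int main() {
    const SIZE_T reserveSize = 1ull << 40;   // reserve 1 TiB of address space (no RAM used yet)
    char* base = (char*)VirtualAlloc(nullptr, reserveSize, MEM_RESERVE, PAGE_NOACCESS);

    // commit a single 64 KiB chunk deep inside the range, on demand
    char* chunk = (char*)VirtualAlloc(base + (1ull << 30), 64 * 1024,
                                      MEM_COMMIT, PAGE_READWRITE);
    memset(chunk, 0, 64 * 1024);             // only now do real pages get backed

    VirtualFree(base, 0, MEM_RELEASE);       // releases the whole reservation
    return 0;
}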
 
Ermmm that quote is from August 2014.

Has anything happened in the last 6 months to refute this quote?

“I think there is a lot of confusion around what and why DX12 will improve,” Torok told GamingBolt.

“Most games out there can’t go 1080p because the additional load on the shading units would be too much. For all these games DX12 is not going to change anything,” Torok explained.

“They might be able to push more triangles to the GPU but they are not going to be able to shade them, which defeats the purpose,”
 
If someone really wanted to be crazy, they could dig through the leaked Xbox One SDK API reference and see if any of the functions map to what's known of DirectX 12.

... Don't look at me.
 
If someone really wanted to be crazy, they could dig through the leaked Xbox One SDK API reference and see if any of the functions map to what's known of DirectX 12.

... Don't look at me.
I did this last night, looking for split render target examples, which I did find. The bundles and stuff they wrote about earlier were there.

The newer features I did not spot, though I wasn't purposefully looking for them.
 
They've reiterated points which others have made, notably these two:

“Most games out there can’t go 1080p because the additional load on the shading units would be too much. For all these games DX12 is not going to change anything,” Torok explained.

Torok suggested that an improved ability to hit 1080p resolutions will actually come from a change in the way that “graphics programmers think about their pipelines”, rather than a magic DX 12 update.

So more pixels = more shading, which DX12 doesn't help with, and engine guys need to think more about the pipeline. I'm sure that others, including Sebbbi, have expressed these sentiments in the past. GCN: Think Different(tm).
 
I'm curious about ExecuteIndirect replacing the DX11 DrawIndirect and DispatchIndirect. Pretty much the stuff that was discussed at GDC (Resource Barrier API, the multi-engine, CPU stuff). You could probably tell from the APIs whether it's using the new model or the old one, if it's not right in the SDK release notes.
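For reference, a minimal sketch of what the DX12 side of that looks like. The function and buffer names here are placeholders, and this assumes a single draw-indexed argument per command:

// D3D12: describe the per-command argument layout once (in real code you'd
// create the command signature at init time, not per frame)...
#include <d3d12.h>

void RecordIndirectDraws(ID3D12Device* device,
                         ID3D12GraphicsCommandList* cmdList,
                         ID3D12Resource* argBuffer,    // array of D3D12_DRAW_INDEXED_ARGUMENTS
                         ID3D12Resource* countBuffer,  // GPU-written UINT draw count
                         UINT maxCommands)
{
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW_INDEXED;

    D3D12_COMMAND_SIGNATURE_DESC sigDesc = {};
    sigDesc.ByteStride       = sizeof(D3D12_DRAW_INDEXED_ARGUMENTS);
    sigDesc.NumArgumentDescs = 1;
    sigDesc.pArgumentDescs   = &arg;

    ID3D12CommandSignature* cmdSig = nullptr;
    device->CreateCommandSignature(&sigDesc, nullptr,
                                   __uuidof(ID3D12CommandSignature), (void**)&cmdSig);

    // ...then one call replays up to maxCommands draws, where DX11's
    // DrawInstancedIndirect/DispatchIndirect took one API call per command.
    cmdList->ExecuteIndirect(cmdSig, maxCommands, argBuffer, 0, countBuffer, 0);

    cmdSig->Release();
}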
 
They've re-iterated points which others have made, notably these two:



So more pixels = more shading which DX12 doesn't help and engne guys need to think more about the pipeline. I'm sure that others, including Sebbbi, have expressed these sentiments in the past. GCN: Think Different(tm).
That will likely be a sore spot for the Xbox One for a while, until they figure out a way around it. Do note that earlier, when I made the post about CU saturation and its related article, that was about GPGPU performance, not game performance.
 
To me, the question is whether DX12 can improve GPU utilization a bit, as well as reduce CPU use. From watching all of the GDC videos, DirectX 12 can improve GPU utilization, but I don't know whether the Xbox One already gained most of that when they added the "fast semantics" API improvements. No matter what, the Xbox One's GPU is always going to be shader-limited compared to the PS4's GPU, so I wouldn't expect the Xbox One to suddenly start pushing 1080p in every game, but that's not really why I'm interested in hearing the details of this stuff.
 
Do we know when we'll see the first benefits of DX12 in an X1 game?
Is there a chance that Quantum Break can benefit from it, or is it too late for this game?
 