DirectX 12: its future in the console gaming space (specifically the XB1)

They showed slides with CPU specs but nothing really on the GPU; anyway, an NDA is well known to be in place. Search for the leaked ID@Xbox emails, for example.
 
 

So I guess this ends all discussion on the matter of DX11.X and what is really going on with the X1. There is another slide indicating that low-level hardware access is included in DX11.X for the X1. So most of the open questions in this thread about the current state of DX11.X on the X1 are settled.

The re-implementation of deferred contexts stood out for me; this is likely the same re-implementation Nvidia drew on when they updated the GeForce drivers and claimed a huge boost in multithreaded draw call performance for DX11 titles.
 
With DX12+, what will be more effective for maintaining a stable high framerate: a sea of low-power cores, or something more in line with Jaguar?
 
A Jaguar is a sea of low-power cores, isn't it?

If I understand correctly, the 'sea of low power cores' is cover for MisterX lunacy. The current party line over there is that the 'XB1 has 50 cores' claim refers to dozens of ARM cores, and that these will be leveraged to deliver a 'PC 2.0 Cloud Supercomputer' (an ever-evolving mix of HSA and cloud paradigms). There is of course research showing that a heterogeneous core mix can work quite well on tailored workloads, but no evidence whatsoever that any of it has anything to do with the XB1; and on the one console where we do have solidly confirmed ARM cores, they are completely unavailable to the game OS.

As to the question of what DX12+ is: DX12 will bring certain limited improvements over the DX11.x used today, which will undoubtedly improve performance over the existing SDK, as most of the improvements reduce the CPU cost of draw calls and expose some existing GCN features at the API level.

As to what will be more effective for a stable high framerate? That's down to experience with ESRAM, which can only come with time. The DX12 API update will help, but it's not about to 'magically' make anything better, and I don't think my throwing out random % improvement numbers will help anyone.
 
It won't matter. x86 might make porting game-derived server code easier, but ultimately the cloud HW is completely removed from the console's use of it. Console requests cloud processing, cloud handles it and returns data. It's a black-box solution.
 
A Jaguar is a sea of low-power cores, isn't it?

Eight cores isn't that many, and they are arranged in a hackish way that makes it very clear they'd be more comfortable with half as many. The cores are also not that low power.

While there is no set in stone threshold for a shift from multiple cores to a manycore system, there is an expectation that the cores and the on-die interconnections would change in design to be able to scale to high counts, readily share resources, and to allow them to work together well.
The consoles strongly recommend avoiding sharing work between the two clusters, and the clusters cannot touch much of the bandwidth.
 
Sorry, I've been a little cryptic.
What I was really thinking is:
1) in the next console generation,
2) once DX12 is established and there's talk of refining it with newer versions,
3) considering the direction DX12 is taking,
4) which would be better:
4a) as many cores as possible, even if this means middling raw performance per core (e.g. an ARMv8 evolution), or
4b) a reasonable number of cores (8 or 12, for example), using the best cores that fit in the TDP (e.g. AMD 'Chupacabra' cores)?

OK, there's a little speculation here.
 
8 Jaguar cores in total (two groups of 4 cores) is certainly not many-core; it's still multi-core.

I think many-core starts off with 10 to 16 cores, and goes up from there.
 
To which Stardock's CEO insisted, via Twitter, that it will have noticeable effects on performance.

http://news.softpedia.com/news/Dire...rformance-Stardock-CEO-Reasserts-448091.shtml

So he's backing off his "double performance across games on Xbox One" statement to a "noticeable effect on performance".

The term "noticeable" could be interpreted as a 35% system-wide improvement, or as improvements in very narrow and specific areas of primarily CPU processing performance that devs can take advantage of, or as a 50% fps improvement, etc. There are a million ways to interpret the word "noticeable".

Well, he's playing with words now. Draw-call-bound, out-of-the-norm demos like his Star Swarm demo, which issues 3,000,000 draw calls per second, will see a big improvement, whereas taking full advantage of the D3D12 rendering path for exactly what is seen in Ryse, Forza 5, CoD: Ghosts, or practically any other game wouldn't see a substantial gain. You could reduce the API overhead a cajillion percent and the bottleneck for these games would still be the GPU or memory bandwidth.
 
So he's backing off his "double performance across games on Xbox One" statement to a "noticeable effect on performance".

The term "noticeable" could be interpreted as a 35% system-wide improvement, or as improvements in very narrow and specific areas of primarily CPU processing performance that devs can take advantage of, or as a 50% fps improvement, etc. There are a million ways to interpret the word "noticeable".

Well, he's playing with words now. Draw-call-bound, out-of-the-norm demos like his Star Swarm demo, which issues 3,000,000 draw calls per second, will see a big improvement, whereas taking full advantage of the D3D12 rendering path for exactly what is seen in Ryse, Forza 5, CoD: Ghosts, or practically any other game wouldn't see a substantial gain. You could reduce the API overhead a cajillion percent and the bottleneck for these games would still be the GPU or memory bandwidth.

I think his ultimate point is that the threshold for such bottlenecks (which, as you say, aren't magically going away) will likely shift to the benefit of Xbox One games, as a direct result of devs being able to talk to the GPU more directly, in ways much more in line with the realities of modern GPU hardware, in addition to getting more from the CPU cores.

Bottlenecks aren't changing, but perhaps you get notably more with existing hardware capabilities.
 
Repi (Frostbite engine architect) has an interesting slide deck (15 slides) that was delivered at an Intel day presentation.

"Low Level Graphics API's"

quote: ".. one needs a clean slate API design ... hence Mantle/DirectX12 .. too much legacy in the way with Dx11, GL4, GLES"

http://www.slideshare.net/repii/int...-ae24-52454516bb1a&v=default&b=&from_search=8

Nice read ... lots in there we already knew, but some interesting observations..

I hope we see the "Extensions" he talks about in future API's ;)
 
Filing this one here; it could very well be applicable to the ESRAM thread as well, so I'll link the article there too.

CD Projekt Red: DirectX 12 Won’t Help Xbox One With 1080p Issue, But GPU Might be Able to Handle More Triangles

“I think there is a lot of confusion around what and why DX12 will improve. Most games out there can’t go 1080p because the additional load on the shading units would be too much. For all these games DX12 is not going to change anything.”

Full article in link below

http://wccftech.com/cd-projekt-red-directx-12-xbox-1080p-issue-gpu-handle-triangles/
 