DirectX 12: The future of it within the console gaming space (specifically the XB1)

You'd consider a 30% pay rise a modest increase? 30% less revenue in a year a modest decrease? 30% faster on your personal best lap time a modest improvement? You're a difficult man to please! (Now substitute that 30% with 100%, as per your notion of doubling being modest!)

To me an order of magnitude is substantial in a technology discussion. That is the level of performance increase necessary to achieve a meaningful change; 2x performance isn't going to do it. The gap between ultra settings on a Titan and low settings on a budget card isn't all that large, and that is probably an order of magnitude.
 
This guy also says: "It gives us more control over multi-threading which in turn has less CPU overhead. Our engine is massively multi-threaded.

The render team can talk more on the technicalities as it's rocket science to me."

Assuming for the sake of argument that he is right, though, it would confirm that pCARS runs on GNMX, which is Sony's path for easy DirectX ports ...

The guy says it's rocket science to him. I wouldn't read too much into his answers.
 
To me an order of magnitude is substantial in a technology discussion.
An order of magnitude is the difference between one generation of consoles and its predecessor 5+ years earlier. You don't get software tweaks resulting in orders of magnitude improvement, save in specific algorithms sometimes, where a 100-fold decrease in the time taken to do something results in a 2 ms gain on a 17 ms frame, and that's considered a significant optimisation at ~10%.
 
DirectX 12-savvy guys: even if what he says is purely conjectural, could you please tell me what the hell he is talking about?



http://forum.projectcarsgame.com/sh...ame-rate-issue&p=942715&viewfull=1#post942715
If there is something missing in GNM that Xbox One will gain in 12, the only thing I can think of is multithreaded command buffers and signalling/triggers, as those are the main DX12 items Xbox stands to gain compared to what it has already. Otherwise this doesn't make sense to me. I've talked to Max, and multi-adapter, presented at Build 2015, was the final main feature to be announced; there are no hidden features, so Ian can't be talking about NDA stuff.
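
(Aside, for anyone wondering what "signalling/triggers" looks like in DX12 terms: it maps roughly onto fences on a command queue. Below is a minimal sketch using the generic desktop D3D12 API, assuming a device and queue were created elsewhere; it is not the Xbox SDK and not anyone's actual engine code.)

// Minimal fence sketch (plain desktop D3D12): the CPU asks the queue to signal
// a value when the GPU reaches that point, then waits on it without spinning.
#include <d3d12.h>
#include <windows.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// 'device', 'queue' and 'fenceValue' are assumed/illustrative inputs.
void WaitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue, UINT64 fenceValue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // Ask the GPU to write 'fenceValue' into the fence once all prior work
    // submitted to this queue has completed.
    queue->Signal(fence.Get(), fenceValue);

    if (fence->GetCompletedValue() < fenceValue)
    {
        // Block the calling CPU thread until the GPU signals.
        HANDLE event = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        fence->SetEventOnCompletion(fenceValue, event);
        WaitForSingleObject(event, INFINITE);
        CloseHandle(event);
    }
}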
 
An order of magnitude is the difference between one generation of consoles and its predecessor 5+ years earlier. You don't get software tweaks resulting in orders of magnitude improvement, save in specific algorithms sometimes, where a 100-fold decrease in the time taken to do something results in a 2 ms gain on a 17 ms frame, and that's considered a significant optimisation at ~10%.

I really appreciate it when people make my points for me! Thanks for helping out.
 
I really appreciate it when people make my points for me! Thanks for helping out.

Sooo...in a hypothetical scenario, could we have expected this from you?

"Ohhh my god!!!!!"
"Did MS use 'substantial' in describing the performance improvement of the Xbox One when moving to DX 12!!!!!?"
"That means MS released two generation worth of consoles in one device!!!!!"

:runaway:.........faints


LOL.
 
There is such a thing as context. Maybe in the grand scheme of things, nothing below an order of magnitude gets people out of bed.
This is true when evaluating switching to something exotic and difficult versus a well-established ecosystem with proven results, say HPC GPU compute or esoteric vector processors versus high-end CPUs.
The impact of the gain has to outweigh the downsides and the inexorable improvement of the tried and true, at least until something physically stops the status quo from getting better.
Order of magnitude is also frequently not used when discussing "major" gains. Order of magnitude is used when discussing gains called "order of magnitude gains".

In this context, 200% performance from a revamp of one ecosystem component with no other disruption would be quite major.
CPU architectures fight and die over performance gains measured in tens of percent per generation, because there are areas where every bit is critical and the economic and development stakes are that high.
For all the stories of projects in alpha or beta scrambling, with months to go, to get their 15-20 FPS games playable, a change like this would have removed a massive amount of development strain at the start of this generation.
It would have maxed out, at playable frame rates, almost all the TVs this console generation will face until its declining years.
The whole kerfuffle over ESRAM and the frame rate/HD-not-HD debate would never have happened.

Once you get past the fact that this universe will probably reach heat death due to entropy in a googolplex to the absurdth power years, this is worth getting out of bed for if we're talking about games.

That's not to say that we should expect that kind of gain in this situation, but let's not be that jaded in the opposite direction.

edit: changed from 200% gain in to "200% performance from"
 
I have spent a week optimizing a 40 instruction (ALU bound) shader, and in the end I got rid of a single instruction from it (a 2.25% gain over a 2 millisecond period). It was worth it. Console graphics programming has never been about what you can do. That 16.6 millisecond time slice tells you what you can do; you just try to cram in as much as possible and cut out the rest.

That said, a 30% total improvement is huge. Especially for 60 fps games.

However, in every generation of consoles, algorithmic improvements have improved the perceived graphics quality (you could call it "performance") by more than the hardware improvement. This generation is no exception.
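
(A quick back-of-the-envelope sketch putting those numbers in frame-budget terms; the 16.6 ms slice and the 2 ms / 2.25% figures come from the post above, the rest is just illustrative arithmetic.)

// Back-of-the-envelope sketch using the numbers quoted above; purely illustrative.
#include <cstdio>

int main()
{
    const double frameBudgetMs = 1000.0 / 60.0;   // ~16.67 ms per frame at 60 fps
    const double shaderPassMs  = 2.0;             // the ALU-bound pass in question
    const double gain          = 0.0225;          // 2.25% faster after a week of work

    const double savedMs = shaderPassMs * gain;   // ~0.045 ms won back per frame
    std::printf("Saved %.3f ms per frame (%.2f%% of the 16.6 ms budget)\n",
                savedMs, 100.0 * savedMs / frameBudgetMs);
    // ~0.27% of the frame: tiny in isolation, but in a fixed 16.6 ms slice every
    // fraction of a millisecond is budget you can spend elsewhere.
    return 0;
}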
 
If there is something missing in GNM that Xbox One will gain in 12, the only thing I can think of is multithreaded command buffers and signalling/triggers

Hmm? Mind explaining what exactly you mean by that? As far as I understand, GNM supports submitting multiple command buffers.

I have spent a week optimizing a 40 instruction (ALU bound) shader, and in the end I got rid of a single instruction from it (a 2.25% gain over a 2 millisecond period). It was worth it.

Hmm, that's satisfying :)
Gone are the days when I could muck about with low-level optimizing. I still remember making a decompression algorithm 300% faster by turning 5% of it into massively optimized assembly... eh, I miss those times a bit...
 
Hmm? Mind explaining what exactly you mean by that? As far as I understand, GNM supports submitting multiple command buffers.
Iroboto is trying to reconcile the knowns with the unknowns. Given claims that DX12-type features added to PS4 would result in improvements in pCARS, and given an understanding of which DX12 features XB1 is missing, one would assume that the same things are absent in the PS4 library being used by SMS. If GNM(X) already has the major features XB1 stands to gain from DX12, there shouldn't be any room for improvement on PS4. If SMS are using GNMX, though, that might explain things: GNM perhaps supports DX12-style features like multiple command buffers, but GNMX, like DX11, doesn't.

I think I speak for all of us when I add :confused:
 
It's confusing because, as I said much earlier in this thread, I did touch on exactly that: looking at the Naughty Dog slides, I had ample evidence that it does support multithreaded command buffer submission.

But honestly, it is the only thing I think the XBO is missing or has to gain going to 12. I could be wrong, of course; perhaps the Xbox SDK has changed drastically enough that there is nothing to gain, and that would pretty much mean we should ignore Ian's comment and business continues as usual.
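
(For reference, "multithreaded command buffer submission" in DX12 terms looks roughly like the sketch below: each worker thread records into its own command allocator/list pair, and the main thread submits them all in one call. This is a minimal generic desktop D3D12 sketch assuming a device and queue already exist; it is not the Xbox SDK and not SMS's engine code.)

// Minimal sketch of multithreaded command list recording in plain D3D12.
// Assumes 'device' and 'queue' already exist; error handling omitted.
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < workerCount; ++i)
    {
        // One allocator + command list per thread: allocators are not
        // thread-safe, so each worker records into its own.
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));

        workers.emplace_back([&, i] {
            // Record this thread's slice of the frame here (draws, barriers, ...).
            lists[i]->Close();   // done recording on this thread
        });
    }

    for (auto& w : workers)
        w.join();

    // Submission itself happens once, from one thread, but recording was parallel.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists)
        raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}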
 
DX12 Acc. retweeted this a couple days ago:




https://twitter.com/DirectX12
 
Ha-ha. DX12 was not even in the plans when the Xbone was made.
And DX12 mostly leverages GCN specs, like Mantle does. There is even some copy-paste from Mantle in the docs.

http://i169.photobucket.com/albums/u233/Rocketrod6a/Mantle vs DX12 pgm guide.jpg
It was in the works for quite some years, but Mantle had some influence on it. The GPU features were already planned (for the most part); those things don't happen overnight. There are more things in DX12 than just reducing CPU overhead (which is what Mantle does). AMD just used Mantle to get a little bit ahead of its competition ... well, or at least get on par with its competition.
 
I have spent a week optimizing a 40 instruction (ALU bound) shader, and in the end I got rid of a single instruction from it (a 2.25% gain over a 2 millisecond period). It was worth it. Console graphics programming has never been about what you can do. That 16.6 millisecond time slice tells you what you can do; you just try to cram in as much as possible and cut out the rest.

That said, a 30% total improvement is huge. Especially for 60 fps games.

However, in every generation of consoles, algorithmic improvements have improved the perceived graphics quality (you could call it "performance") by more than the hardware improvement. This generation is no exception.
Thanks for this; your ideas will bear fruit.

FPS is a creative decision, correct? It depends on the type of game you make; the frame rate is a creative choice by the developers. They usually say that a slower frame rate (30 FPS) is there for a more emotional connection with the character and the animation. More like cinema.

To me, a faster frame rate (60 fps) is more realistic, more true to life. Life runs at a very soft frame rate, I'd say 70 or 80 fps. I may enjoy some 30 fps games (TW3, Skyrim, RDR), although 60 frames per second exists for the creation of some really fine games.

DX12 will be ready two years after the Xbone launch. Of course they had enough time to see how much of a clusterfuck the Xbone is.
:) Only DirectX 12 could cause so much stinging.
 
I have problems with definitions in this thread.
Hardware has a certain performance; we can always set that as 100%, i.e. 100% utilization of the hardware is 100% of the performance extracted.
But to extract that performance you need to hand-code everything you send to the hardware; we can set that as 0% productivity.
Then we have an API. An API reduces the hardware performance to something below 100%, and in return it gives you the ability to develop code faster and more easily. That's the ONLY purpose of an API, i.e. you gain >0% productivity for <100% of the performance.
The better the API is, the more productivity you gain for less performance loss.
But it's always a loss; there are (even theoretically) no APIs that can increase hardware performance to >100%, by definition.
So, the main theme here is: "how DX12 makes XB1 suck less compared to DX11".
And it's pointless to compare it to PS4, because their API is very low level already, i.e. 1% productivity for 99% of performance, therefore there is nothing you can improve there, performance-wise.

P.S. I'm talking here about the API, not the games that use the API, because obviously if your team doesn't have enough brains/money/time to use a 1%/99% API, you will probably make a better-performing game using a 50%/50% API.
 
Ha-ha. DX12 was not even in the plans when the Xbone was made.
And DX12 mostly leverages GCN specs, like Mantle does. There is even some copy-paste from Mantle in the docs.

http://i169.photobucket.com/albums/u233/Rocketrod6a/Mantle vs DX12 pgm guide.jpg

I've seen many throw this assertion out there, and it seems to be a rather easy narrative to paint, especially when no one is talking (MS, AMD or Nvidia). But one thing seems odd to me.

Doesn't MS producing a low-level API, especially a general API compatible with a broad swath of hardware, require cooperation from both Nvidia and AMD?

It seems to me that Nvidia dragging its feet would be the more likely source of the delay in the development and release of DX12.

Mantle should have scared Nvidia far more than it should have scared MS.
 