DirectX 12: The future of it within the console gaming space (specifically the XB1)

Maybe (hopefully) DX12 will get back to one clearly defined feature set. It sounds like the discussions started a long time ago and the implementation is in the first of roughly three years of work.

That would be great, but they've already mentioned that they want it to run on existing hardware while also supporting new hardware features. Personally I'm totally fine with feature levels, but I suppose in practice they have a hard time fitting a wide array of hardware into such coarse buckets of functionality.
 

Not to mention that it would mean endless compromises on what the API actually ends up featuring. IIRC with DX10, for example, when we got rid of the "cap bits", more features were supposed to be included than what we finally got, because NV didn't support some of the planned features on G80. Now, with feature levels, all of those features can be made available even if one vendor doesn't support some of them (like feature level 11_0 vs 11_1).
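For anyone who hasn't touched it recently, this is roughly what the bucket model looks like from the application side in D3D11 today. A minimal sketch of my own (error handling trimmed, and the logic-op query is just one example of the few optional caps that still live inside a level):

Code:
#include <windows.h>
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main()
{
    // Ask for the highest bucket first; the runtime reports the best level
    // the driver actually supports through the out parameter.
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0
    };
    const UINT numWanted = sizeof(wanted) / sizeof(wanted[0]);

    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    granted = D3D_FEATURE_LEVEL_10_0;

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        wanted, numWanted, D3D11_SDK_VERSION,
        &device, &granted, &context);

    // Known quirk: on a machine with only the 11.0 runtime, listing 11_1
    // makes the call fail with E_INVALIDARG, so retry without it.
    if (hr == E_INVALIDARG)
        hr = D3D11CreateDevice(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
            wanted + 1, numWanted - 1, D3D11_SDK_VERSION,
            &device, &granted, &context);
    if (FAILED(hr))
        return 1;

    std::printf("Feature level granted: 0x%04x\n", static_cast<unsigned>(granted));

    // Even inside a bucket a handful of optional caps remain, queried through
    // CheckFeatureSupport rather than DX9-style cap bits.
    D3D11_FEATURE_DATA_D3D11_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS,
                                              &opts, sizeof(opts))))
        std::printf("Output-merger logic ops: %d\n", opts.OutputMergerLogicOp);

    context->Release();
    device->Release();
    return 0;
}

On 11_0-only hardware that simply hands back the lower level, and the small set of CheckFeatureSupport options is the only cap-bit-like wiggle room left.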
 
Microsoft's Build Conference starts tomorrow and there seems to be one session dedicated to DirectX 12:

Direct3D 12 API Preview
Come learn how future changes to Direct3D will enable next generation games to run faster than ever before! In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware. If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.
 
I strongly suspect that it would be something where you have to decompress to memory first, as opposed to doing it on-the-fly during texture sampling. Block-compressed formats are specifically designed to be fast and easy to decode, while JPEG is not.

Correct. JPEG has no random-access ability in the ISO definition; to get the last pixel you have to decode the whole serial bitstream.
That said, it would be possible to do fixed-rate coding on DCT blocks in general, but that's not called JPEG, and my guess is it'd really suck quality-wise. :)
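That fixed-rate property is the whole trick. As a toy illustration (my own sketch, assuming plain BC1 with its 8-byte 4x4 blocks and no mip chain), the block holding any texel can be addressed directly:

Code:
#include <cstddef>
#include <cstdint>

// BC1/DXT1 stores every 4x4 texel block in exactly 8 bytes, so the block that
// contains texel (x, y) sits at a byte offset the sampler can compute directly.
// A JPEG bitstream has no such property: the entropy-coded data has to be
// walked serially from the start.
std::size_t bc1_block_offset(std::uint32_t x, std::uint32_t y,
                             std::uint32_t tex_width)
{
    const std::uint32_t blocks_per_row = (tex_width + 3) / 4; // round up
    const std::uint32_t block_x = x / 4;
    const std::uint32_t block_y = y / 4;
    return (static_cast<std::size_t>(block_y) * blocks_per_row + block_x) * 8;
}

So texel (1023, 1023) of a 1024-wide BC1 texture lives in the 8-byte block at offset (255 * 256 + 255) * 8 = 524,280, and nothing before it ever needs to be touched.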
 

D3D12 API Preview slides:

http://video.ch9.ms/sessions/build/2014/3-564.pptx
 
That article sounds like a bunch of hyperbolic crap. It's highly doubtful the Bone will get a 2X speedup from D3D12, especially if D3D12 is based on the Xbox API. Unless of course, MS really screwed the pooch with the XBone API.
 
The author blatantly hasn't got the first clue
That article sounds like a bunch of hyperbolic crap
You cannot directly translate CPU utilization improvements into framerate improvements; that could only happen with a very slow CPU driving a very fast GPU.

In reality the new API and driver model would free up some additional CPU time for the developer. Those extra 3-5 ms per frame can be used to perform more AI tasks, to batch more work to the GPU and better saturate the execution units, or just to hit the same 60 fps on your TV (but in a more efficient way :) ). But it cannot magically make your GPU perform twice as fast and exceed its theoretical maximum performance. You will still be limited by the GPU in graphics-heavy workloads.
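A crude back-of-the-envelope model makes the point. The numbers below are made up purely for illustration, and the max() is just the usual simplification of CPU and GPU working in parallel on successive frames:

Code:
#include <algorithm>
#include <cstdio>

int main()
{
    const double gpu_ms     = 16.0; // GPU-bound graphics workload
    const double cpu_old_ms = 12.0; // game + API/driver overhead, old model
    const double cpu_new_ms =  7.0; // say the new API saves ~5 ms of CPU time

    // Frame time is bounded by whichever side is slower.
    const double fps_old = 1000.0 / std::max(gpu_ms, cpu_old_ms);
    const double fps_new = 1000.0 / std::max(gpu_ms, cpu_new_ms);

    // Both print ~62.5 fps: the saved CPU time never shows up as framerate
    // while the GPU is the bottleneck. It only turns into extra fps when the
    // CPU was the slower side to begin with (slow CPU + fast GPU).
    std::printf("old: %.1f fps, new: %.1f fps\n", fps_old, fps_new);
    return 0;
}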
 
Yup, Mantle has always seemed to me like the ATI answer to the low IPC throughput of AMD CPUs, letting Kaveri and their future APU designs punch above their weight. It's a good thing that we're getting standards-based solutions to this, though, rather than a proprietary API (CUDA, how are ya).
 
Does "conservative rasterisation" really require new hardware? From what I understand, this could be implemented at the driver level on current architectures, since the GPU is basically a general-purpose many-core wavefront processor where the driver supplies the native code that runs on the wavefronts. You would need to alter the algorithms for rasterisation or triangle setup, but it doesn't require new instructions for operand swizzles, more physical registers, or virtual memory descriptor tables, as other level 11_1 features do.
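To make "altering the rasterisation algorithm" a bit more concrete, here is a toy CPU-side sketch of an over-estimating coverage test; my own illustration, not how any driver actually implements it. Each edge equation is simply pushed outwards by the worst the pixel square can contribute:

Code:
#include <cmath>

struct Vec2 { float x, y; };

// Standard edge function: positive on the interior side for a CCW triangle.
static float edge(const Vec2& a, const Vec2& b, float px, float py)
{
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

// Conservative (over-estimating) test: instead of asking whether the pixel
// centre is inside each edge, evaluate the edge at the pixel corner furthest
// along the edge normal, which is the same as adding a per-edge slack term.
bool pixel_conservatively_covered(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                                  float cx, float cy) // pixel centre
{
    const Vec2 verts[3] = { v0, v1, v2 };
    for (int i = 0; i < 3; ++i)
    {
        const Vec2& a = verts[i];
        const Vec2& b = verts[(i + 1) % 3];
        // Max of the (linear) edge function over the 1x1 pixel square
        // [cx-0.5, cx+0.5] x [cy-0.5, cy+0.5].
        const float slack = 0.5f * (std::fabs(b.x - a.x) + std::fabs(b.y - a.y));
        if (edge(a, b, cx, cy) + slack < 0.0f)
            return false; // the entire pixel is outside this edge
    }
    return true; // the pixel square touches all three half-planes
}

The slack term is the only change versus a normal centre-sample test, which is why it's tempting to assume current GPUs could do it without new silicon; the open question is whether doing that outside the fixed-function rasteriser would be anywhere near as fast.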
 
That article sounds like a bunch of hyperbolic crap. It's highly doubtful the Bone will get a 2X speedup from D3D12, especially if D3D12 is based on the Xbox API. Unless of course, MS really screwed the pooch with the XBone API.
Beat me to the punch.
But if in a year XBone titles run faster than PS4 titles, I will admit to being wrong. Hell, even today if he can prove that.
Until then this guy should be ignored.
 
The author blatantly hasn't got the first clue what he's talking about. Either that or he's 3 days late for April Fools.

That is Brad Wardell, so you have to assume that he knows exactly what he is saying. Brad is no MS fanboy, given how long he avoided working in Windows.
 
Again, it doesn't matter who's saying it. There are a couple of fundamental console engineering issues here that make that assertion nigh impossible. 1) For DX12 to provide XB1 with a 2x speed increase, its current API must be horribly inefficient and pretty crippled. We know that some aspects of DX12 are already present in XB1, so the likelihood of this is virtually nil. 2) Should XB1 become twice as fast, going by multiplat games the hardware would equal or exceed the PS4 in performance, meaning 12 CUs working faster than 18 CUs by way of an API and some ESRAM. Again, the chances of that happening are basically nil.

I suppose, playing devil's advocate, Sony's API could be as gimped as XB1's and similarly reduce PS4's hardware to half its performance, and then when DX12 flies in to save the day, XB1 will be unlocked and reach its full potential, which PS4 can't hope to achieve. Realistically, though, it's bunk. Consoles have thin APIs (maybe not XB1 with its three-flavour Windows base??) that let the hardware run full tilt. People shouldn't be looking for massive speed-ups. Optimisations, sure, but anyone claiming 2x the performance is either being taken out of context (talking about one single aspect being twice as fast) or babbling like a lunatic.
 
Does "conservative rasterisation" really require new hardware? From what I understand, this could be implemented at the driver level on current architectures, since the GPU is basically a general-purpose many-core wavefront processor where the driver supplies the native code that runs on the wavefronts. You would need to alter the algorithms for rasterisation or triangle setup, but it doesn't require new instructions for operand swizzles, more physical registers, or virtual memory descriptor tables, as other level 11_1 features do.

There are still a handful of fixed-function hardware units that handle clipping, scan conversion, depth testing, and spinning up pixel shaders.
 
Again, it doesn't matter who's saying it. There are a couple of fundamental console engineering issues here that make that assertion nigh impossible. 1) For DX12 to provide XB1 with a 2x speed increase, its current API must be horribly inefficient and pretty crippled. We know that some aspects of DX12 are already present in XB1, so the likelihood of this is virtually nil. 2) Should XB1 become twice as fast, going by multiplat games the hardware would equal or exceed the PS4 in performance, meaning 12 CUs working faster than 18 CUs by way of an API and some ESRAM. Again, the chances of that happening are basically nil.

I suppose, playing devil's advocate, Sony's API could be as gimped as XB1's and similarly reduce PS4's hardware to half its performance, and then when DX12 flies in to save the day, XB1 will be unlocked and reach its full potential, which PS4 can't hope to achieve. Realistically, though, it's bunk. Consoles have thin APIs (maybe not XB1 with its three-flavour Windows base??) that let the hardware run full tilt. People shouldn't be looking for massive speed-ups. Optimisations, sure, but anyone claiming 2x the performance is either being taken out of context (talking about one single aspect being twice as fast) or babbling like a lunatic.

He seems to believe that the gains are found on the CPU side, from better threading and efficiency gains in the D3D API. I don't buy it.
 

Yup I can't see a 100% performance boost on XB1 from DX12 unless MS gave the XB1 API development to 2 interns with a kegger and said 'have at it, your deadline is tomorrow by 5'.
 