DirectX 12: The future of it within the console gaming space (specifically the XB1)

Filing this one here; it could very well be applicable to the eSRAM thread as well, so I'll link the article there too.

CD Projekt Red: DirectX 12 Won’t Help Xbox One With 1080p Issue, But GPU Might be Able to Handle More Triangles



Full article at the link below:

http://wccftech.com/cd-projekt-red-directx-12-xbox-1080p-issue-gpu-handle-triangles/

He goes on to talk about how devs will need to develop new techniques for the shading pipeline to improve performance. Sounds like what sebbi was talking about with the future use of compute shaders. Of course this will apply to both consoles, not just the X1 and DX12. I do remember reading an article about Ryse where Crytek stated they used one compute unit for all shading and culling. I'm sure the other 11 CUs didn't go unused, but I wonder what sort of overhead was left in Ryse?
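For anyone wondering what "culling on a compute unit" actually involves, the core idea is easy to sketch. Below is a toy CPU-side C++ version (not Crytek's code, and all the numbers are invented): test each object's bounding sphere against the six frustum planes and keep the survivors. A real GPU implementation runs one compute-shader thread per object and appends survivors to a buffer with an atomic counter instead of push_back.

```cpp
#include <cstdio>
#include <vector>

// Plane in the form ax + by + cz + d = 0, normal pointing into the frustum.
struct Plane { float a, b, c, d; };
struct Sphere { float x, y, z, r; };

// Signed distance from sphere centre to plane (normals assumed normalized).
static float Distance(const Plane& p, const Sphere& s) {
    return p.a * s.x + p.b * s.y + p.c * s.z + p.d;
}

// Returns the indices of spheres at least partially inside all six planes.
std::vector<int> CullSpheres(const Plane planes[6],
                             const std::vector<Sphere>& spheres) {
    std::vector<int> visible;
    for (int i = 0; i < (int)spheres.size(); ++i) {
        bool inside = true;
        for (int p = 0; p < 6 && inside; ++p)
            inside = Distance(planes[p], spheres[i]) >= -spheres[i].r;
        if (inside) visible.push_back(i);
    }
    return visible;
}

int main() {
    // Axis-aligned box "frustum" from -10..10 on each axis, for illustration.
    Plane planes[6] = {
        { 1, 0, 0, 10 }, { -1, 0, 0, 10 },
        { 0, 1, 0, 10 }, { 0, -1, 0, 10 },
        { 0, 0, 1, 10 }, { 0, 0, -1, 10 },
    };
    std::vector<Sphere> spheres = { { 0, 0, 0, 1 }, { 50, 0, 0, 1 }, { 9, 0, 0, 2 } };
    for (int i : CullSpheres(planes, spheres))
        std::printf("sphere %d survives culling\n", i);
}
```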
 
He goes on to talk about how devs will need to develop new techniques for the shading pipeline to improve performance. Sounds like what sebbi was talking about with the future use of compute shaders. Of course this will apply to both consoles, not just the X1 and DX12. I do remember reading an article about Ryse where Crytek stated they used one compute unit for all shading and culling. I'm sure the other 11 CUs didn't go unused, but I wonder what sort of overhead was left in Ryse?

One CU? That's remarkable.
 
Slightly OT, but from when DirectX 12 was announced until, say, holiday 2015, does anyone know what exactly they are doing with the API (from a process perspective)? It's still under NDA, and I'm just curious what might change during this beta period beyond the obvious (ensuring card compliance). It's curious to me because I don't recall such a beta period for DirectX 11 or any of its point revisions.

I understand that it's available to developers who have applied for a DirectX 12 license, but if you aren't making a DirectX 12 game, you can't really help develop or test it, right? Or can one actually just look at the API and spot potential issues?
 
He goes on to talk about how devs will need to develop new techniques for the shading pipeline to improve performance. Sounds like what sebbi was talking about with the future use of compute shaders. Of course this will apply to both consoles, not just the X1 and DX12. I do remember reading an article about Ryse where Crytek stated they used one compute unit for all shading and culling. I'm sure the other 11 CUs didn't go unused, but I wonder what sort of overhead was left in Ryse?

Question is:
Ryse is 900p/30 fps (barely)... limited by the amount of eSRAM.
Unfortunately, the use of CUs for compute shaders or other GPGPU work requires fast RAM, and the Xbox One has little of it to spare, since the eSRAM is already consumed by the current graphics framebuffer.

As such, with the use of GPGPU, eSRAM availability may become an even bigger problem in the future, and I see this observation from CD Projekt Red as almost a fact. Tiling is of course the solution, but with current development costs, how many will use it?
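To put a rough number on why tiling keeps coming up, here's a back-of-envelope C++ calculation, assuming the 28-bytes-per-pixel G-buffer cost quoted later in the thread and ignoring per-tile overlap and resolve costs:

```cpp
#include <cmath>
#include <cstdio>

// Back-of-envelope: how many tile passes it takes to push one frame's
// G-buffer through 32 MB of eSRAM at 1080p. Ignores tile overlap/resolves.
int main() {
    const double esramMB = 32.0;
    const int width = 1920, height = 1080;
    const int bytesPerPixel = 28;   // fat deferred G-buffer, per the thread
    const double frameMB =
        (double)width * height * bytesPerPixel / (1024.0 * 1024.0);
    const int passes = (int)std::ceil(frameMB / esramMB);
    std::printf("G-buffer at 1080p: %.1f MB -> %d tile passes\n",
                frameMB, passes);
}
```

At 1080p the G-buffer alone is ~55 MB, i.e. two passes through the eSRAM, which is exactly the kind of extra engine work the development-cost question is about.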
 
A tile-based method should still have performance advantages regardless of the hardware setup; I think Frostbite is already using one?
 
Hypothetical time.

If DirectX 12 was applied in an update, would uncapped games released pre-update get better performance? Or does each game need to be updated to take advantage of DirectX 12 optimizations?
 
Question is:
Ryse is 900p/30 fps (barely)... limited by the amount of eSRAM.
Unfortunately, the use of CUs for compute shaders or other GPGPU work requires fast RAM, and the Xbox One has little of it to spare, since the eSRAM is already consumed by the current graphics framebuffer.

As such, with the use of GPGPU, eSRAM availability may become an even bigger problem in the future, and I see this observation from CD Projekt Red as almost a fact. Tiling is of course the solution, but with current development costs, how many will use it?

Ryse was a launch, first-wave game that also recently won the SIGGRAPH 2014 award for Best Real-Time Graphics. Developers are slowly learning new techniques to use the eSRAM more effectively. It's rather ironic that people kept saying Ryse's 900p/30fps was bad, and yet it had some of the best visual quality and special effects of all the launch games.

Tiling is the future solution. You can't create higher-quality (i.e. larger) textures without tiling, as even 8 GB is not enough. It's why middleware supports tiling too: it's THE future. It's why there's now hardware-based PRT and Tiled Resources.
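A toy C++ illustration of that point. The 64 KB tile size is the standard one for hardware Tiled Resources; the 8K RGBA8 texture and the ~5% residency figure are invented for the example:

```cpp
#include <cstdio>

// With PRT / Tiled Resources, only the tiles a frame actually samples need
// to be resident in memory, not the whole texture.
int main() {
    const int texSize    = 8192;   // 8K x 8K texture (illustrative)
    const int bpp        = 4;      // RGBA8
    const int tileTexels = 128;    // one 64 KB tile = 128x128 RGBA8 texels
    const long long fullBytes = (long long)texSize * texSize * bpp;
    const int tilesPerSide  = texSize / tileTexels;
    const int totalTiles    = tilesPerSide * tilesPerSide;
    const int residentTiles = totalTiles / 20;   // assume ~5% visible
    std::printf("full texture: %lld MB in %d tiles\n",
                fullBytes >> 20, totalTiles);
    std::printf("resident at ~5%%: %d tiles = %d MB\n",
                residentTiles, (residentTiles * 64) / 1024);
}
```

A 256 MB texture drops to ~12 MB of actual residency, which is why "even 8 GB is not enough" stops being a blocker once tiling is in play.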

Lastly, no one so far has mentioned an interview with the Battlefield Hardline team about eSRAM and DX12 from four days prior to the Witcher interview. Why do the Battlefield and Witcher devs have opposing views of DX12? http://gamingbolt.com/battlefield-hardline-interview-we-want-to-make-sure-that-we-deliver-on-peoples-expectations

Leonid Melikhov: Have you had problems with the Xbox One’s eSRAM?

Ian Milham: No, absolutely not. The fact that we partnered with Sony for the beta we’re doing has nothing to do with our development on the Xbox One. All of our development on the Xbox One has been going great as well, and it looks great on that platform.

Leonid Melikhov: How do you guys feel about DirectX 12?

Ian Milham: They’re cooking up some pretty crazy stuff – the Frostbite team with the latest things. It’s always pretty impressive; we are going to try to get in every feature we can before we ship.
 

A tile-based method should still have performance advantages regardless of the hardware setup; I think Frostbite is already using one?


Frostbite 3 does indeed use tiled deferred.


BF4 runs at 720p, though PvZ is 900p.
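Since tiled deferred keeps coming up: the heart of it is just light binning. A minimal CPU-side C++ sketch of that step (light positions and radii invented; Frostbite-style renderers do this in a compute shader, one thread group per screen tile, then shade each pixel using only its tile's light list):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Tiled deferred in one picture: split the screen into 16x16-pixel tiles,
// build a per-tile light list, shade each pixel from its tile's list only.
struct Light { float x, y, radius; };   // screen-space position + radius

int main() {
    const int width = 1280, height = 720, tile = 16;
    const int tilesX = width / tile, tilesY = height / tile;
    std::vector<std::vector<int>> bins(tilesX * tilesY);

    std::vector<Light> lights = { { 100, 100, 40 }, { 640, 360, 200 } };
    for (int i = 0; i < (int)lights.size(); ++i) {
        const Light& l = lights[i];
        // Conservative screen-space bounds of the light, clamped to screen.
        int x0 = std::max(0, (int)(l.x - l.radius) / tile);
        int x1 = std::min(tilesX - 1, (int)(l.x + l.radius) / tile);
        int y0 = std::max(0, (int)(l.y - l.radius) / tile);
        int y1 = std::min(tilesY - 1, (int)(l.y + l.radius) / tile);
        for (int ty = y0; ty <= y1; ++ty)
            for (int tx = x0; tx <= x1; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    std::printf("tile (6,6) sees %zu light(s)\n", bins[6 * tilesX + 6].size());
}
```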
 
And NFS: Rivals @ 1080p/30fps on both consoles...

Yes and no. The XB1 made some compromises to reach that goal, though: lack of HBAO and BDoF, lower shadow precision, etc., compared to the PS4 version.
 
Question is:
Ryse is 900p/30 fps (barely)... limited by the amount of eSRAM.
Unfortunately, the use of CUs for compute shaders or other GPGPU work requires fast RAM, and the Xbox One has little of it to spare, since the eSRAM is already consumed by the current graphics framebuffer.

As such, with the use of GPGPU, eSRAM availability may become an even bigger problem in the future, and I see this observation from CD Projekt Red as almost a fact. Tiling is of course the solution, but with current development costs, how many will use it?

I'm no dev or programmer, just a fan of graphics and games.
It all depends on whose opinion you listen to. eSRAM is not only used for the framebuffer.
Some devs say the most efficient way to use the eSRAM is to put the most important or most used parts of the framebuffer into eSRAM and use main RAM for everything else.
Tiling is an option, and devs are used to it from their experience with the Xbox 360.
As for memory for GPGPU, someone else can probably answer that better, but I'm pretty sure devs can choose between eSRAM and main RAM. People keep looking at the eSRAM as just a small 32 MB cache, but the most important thing the eSRAM provides is high bandwidth. Devs aren't stuck moving only 32 MB per second; if they were, the bandwidth would never reach ~150 GB/s. My statement was only intended to point out the part of the quote that was left out of the original post.
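That "most-used parts of the framebuffer in eSRAM, everything else in main RAM" idea can be sketched as a toy greedy packer. All the target sizes and access counts below are invented; a real engine would weigh actual bandwidth demand, not just access counts:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Toy placement heuristic: sort render targets by how often they're touched
// per frame and fill the 32 MB eSRAM budget; the rest spills to DRAM.
struct Target { const char* name; double mb; int accessesPerFrame; };

int main() {
    std::vector<Target> rts = {
        { "depth",        7.9, 9 },   // 1080p 32-bit depth, read constantly
        { "color",        7.9, 6 },
        { "gbuf-normals", 7.9, 4 },
        { "gbuf-albedo",  7.9, 3 },
        { "shadow-map",  16.0, 2 },
    };
    // Most frequently touched targets first.
    std::sort(rts.begin(), rts.end(), [](const Target& a, const Target& b) {
        return a.accessesPerFrame > b.accessesPerFrame;
    });
    double budget = 32.0;   // MB of eSRAM
    for (const Target& t : rts) {
        bool fits = t.mb <= budget;
        if (fits) budget -= t.mb;
        std::printf("%-12s %5.1f MB -> %s\n", t.name, t.mb,
                    fits ? "eSRAM" : "DRAM");
    }
}
```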
 
You are confusing two different things. Tiling, in the sense of fitting render targets into memory, is different from tile-based deferred rendering.
 
Yes and no. The XB1 made some compromises to reach that goal, though: lack of HBAO and BDoF, lower shadow precision, etc., compared to the PS4 version.

Doesn't the PS4 version also run faster than 30fps and have awful stuttering as a result?
 
Doesn't the PS4 version also run faster than 30fps and have awful stuttering as a result?

I believe all three versions suffered from frame-pacing issues. I own the PC version... even after unlocking 60fps via the game's executable, it still has some weird hiccups and frame-pacing issues. Not sure if that ever got resolved (patched); I haven't played the game in quite some time.
 
If DirectX 12 was applied in an update, would uncapped games released pre-update get better performance? Or does each game need to be updated to take advantage of DirectX 12 optimizations?
Developers of course need to port their games to the newest DirectX version if they want to use it. It's always been like this (DX9 games didn't automatically upgrade to DX10 or DX11 either). The DirectX 12 API is quite different from DirectX 10/11: it's lower level (manual resource management and synchronization) to make it faster.
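To make "manual synchronization" concrete, here's a minimal sketch of the basic DX12 fence pattern using the publicly documented PC API (error handling omitted; console specifics are under NDA, so no claims about those):

```cpp
// Build: cl /EHsc sync.cpp d3d12.lib  (Windows 10+, DX12-capable GPU)
#include <d3d12.h>
#include <windows.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));

    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE event = CreateEventW(nullptr, FALSE, FALSE, nullptr);

    // ... record and execute command lists here ...

    // Ask the GPU to write 1 into the fence when it reaches this point in
    // the queue, then block the CPU until that actually happens.
    const UINT64 done = 1;
    queue->Signal(fence.Get(), done);
    if (fence->GetCompletedValue() < done) {
        fence->SetEventOnCompletion(done, event);
        WaitForSingleObject(event, INFINITE);
    }
    CloseHandle(event);
    return 0;
}
```

In DX11 the driver did this tracking behind your back; in DX12, forgetting a fence like this means the CPU can overwrite memory the GPU is still reading.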
 
I'm no dev or programmer, just a fan of graphics and games.
It all depends on whose opinion you listen to. eSRAM is not only used for the framebuffer.
Some devs say the most efficient way to use the eSRAM is to put the most important or most used parts of the framebuffer into eSRAM and use main RAM for everything else.
Tiling is an option, and devs are used to it from their experience with the Xbox 360.
As for memory for GPGPU, someone else can probably answer that better, but I'm pretty sure devs can choose between eSRAM and main RAM. People keep looking at the eSRAM as just a small 32 MB cache, but the most important thing the eSRAM provides is high bandwidth. Devs aren't stuck moving only 32 MB per second; if they were, the bandwidth would never reach ~150 GB/s. My statement was only intended to point out the part of the quote that was left out of the original post.

I am no programmer or dev either. But according to this, deferred rendering as used in Ryse needs 28 bytes per pixel for G-buffer creation.

As such, there is just enough memory for... guess what... 792p.

Ryse uses several techniques to compress data and manages to decrease that usage (something also mentioned in the previous link, which states that memory usage can be reduced to 20-24 bytes per pixel).

But even so, that only allows 900p in those 32 MB.
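The arithmetic, worked through in a few lines of C++ (byte costs per pixel from the figures above, 32 MB treated as a hard ceiling):

```cpp
#include <cstdio>

// G-buffer footprint = width x height x bytes-per-pixel, vs 32 MB of eSRAM.
int main() {
    struct Res { const char* name; int w, h; } res[] = {
        { "720p",  1280,  720 }, { "792p",  1408,  792 },
        { "900p",  1600,  900 }, { "1080p", 1920, 1080 },
    };
    const int costs[] = { 28, 24, 20 };  // bytes/pixel, per the posts above
    for (const Res& r : res)
        for (int bpp : costs) {
            double mb = (double)r.w * r.h * bpp / (1024.0 * 1024.0);
            std::printf("%-5s @ %2d B/px: %5.1f MB %s\n",
                        r.name, bpp, mb, mb <= 32.0 ? "(fits)" : "");
        }
}
```

Note that at 24 B/px, 900p comes out at about 33 MB, just over the line, which fits with 900p only squeezing in toward the low end of that 20-24 byte range.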
 