DirectX 12: Its future within the console gaming space (specifically the XB1)

What does a CPU need to be DX12 compliant?!
Intel could create a business partnership with Microsoft so that, through some obtuse implementation, only newer Intel chips are supported. This would remove the chance that people with older chips could get more longevity out of them (e.g. Core 2 Quads / Nehalem-based i3/i5/i7).

Intel needs some mechanism to get people to upgrade, since performance improvements have slowed and with them the need to upgrade. A partnership that locks Core 2/Nehalem owners out of DX12 would be great for Intel. Expect some Intel spokespeople and MS DX12 developers to say those chips are unsupported for "technical" reasons. This is great for consumers! :D

/the smiley is sarcasm
 
Page 39 is about resource binding support (tiers 1-3). I wonder who is tier 1 and tier 2? GCN is obviously tier 3 since it is always bindless. In GCN, the shader scalar registers (SGPRs) hold resource descriptors. There's no global state for shader resources (each wave could be running separate code and accessing a fully separate set of resources). Tier 1 numbers seem to be high enough for most purposes. The only exception is fully bindless rendering, where you set up all the resources at once (at level load time or once per frame) in a single big array and do dynamic indexing into the resource descriptor array in the shader code.
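If anyone wants to see which tier their own GPU reports under the Windows 10 Preview, here's a minimal sketch. It assumes the preview SDK's d3d12.h is installed and uses only the stock caps query; none of it comes from the slide deck itself:

```cpp
// Minimal sketch: print the resource binding tier of the default adapter.
// Assumes the Windows 10 SDK headers are available and you link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No D3D12-capable device/driver found.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                &options, sizeof(options));

    // Reports D3D12_RESOURCE_BINDING_TIER_1/2/3 (the tiers on page 39).
    std::printf("Resource binding tier: %d\n",
                static_cast<int>(options.ResourceBindingTier));
    return 0;
}
```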
 
Don’t take my word for it! Install the Windows 10 Preview and run dxdiag.exe, then look for the DirectX Version Info at the bottom of the System tab

http://www.pcworld.com/article/2875...-into-windows-10-but-you-cant-use-it-yet.html
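You can also ask the API directly instead of eyeballing dxdiag. A hedged sketch, reusing the `device` from the snippet above; whether the preview driver actually reports the 12_x enums is exactly what you'd be testing:

```cpp
// Query the highest feature level the driver exposes.
// Assumes a valid ID3D12Device* named `device` (see the earlier snippet).
const D3D_FEATURE_LEVEL requested[] = {
    D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
    D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
};

D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
levels.NumFeatureLevels        = sizeof(requested) / sizeof(requested[0]);
levels.pFeatureLevelsRequested = requested;

if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                          &levels, sizeof(levels))))
{
    // 0xC100 == 12_1, 0xC000 == 12_0, 0xB100 == 11_1, 0xB000 == 11_0
    std::printf("Max supported feature level: 0x%X\n",
                static_cast<unsigned>(levels.MaxSupportedFeatureLevel));
}
```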
 
Phil never explicitly says feature level 12_0 in any tweet or interview though.
 
Those are likely requirements for the integrated GPU in those processors. If you have a discrete GPU you can ignore the above.

Microsoft's emphasis on integrated GPU requirements is telling. Discrete GPU market share will (sadly) continue to diminish into irrelevance.

Cheers
Can devs use the integrated GPU (either Intel or AMD) in tandem with a dGPU and leverage the iGPU for compute (non-graphics) purposes only? I know that this is possible right now, but would DX12 accelerate this kind of scenario? Preferably you could mix and match iGPU and dGPU, especially since Intel doesn't have a dGPU product.

Edit: my bad, I didn't notice this is the console forum. Maybe someone can give a quick answer or move it to the appropriate forum/thread.
 
That is something where async compute could be really good on the PC, but not for consoles.
Still, at least this is one way those otherwise unused iGPUs can be put to work.
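In DX12 terms this would go through explicit multi-adapter: the app enumerates every adapter itself and can create a device (and, say, a compute-only queue) on the iGPU alongside the dGPU. A rough sketch only; the "use the second adapter for compute" policy is my own assumption for illustration, not anything the API mandates:

```cpp
// Sketch: enumerate all adapters and create a compute-only queue on a second
// (e.g. integrated) GPU alongside the primary device. Error handling omitted.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }

    if (devices.size() >= 2)
    {
        // Assumption for illustration: treat the second adapter as the iGPU and
        // give it a compute-only queue for async GPGPU work.
        D3D12_COMMAND_QUEUE_DESC queueDesc = {};
        queueDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

        ComPtr<ID3D12CommandQueue> computeQueue;
        devices[1]->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&computeQueue));
        // Sharing results between the two devices would go through cross-adapter
        // shared heaps/resources, which is beyond this sketch.
    }
    return 0;
}
```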
 
OIT to Volumetric Shadow Mapping, 101 Uses for Raster Ordered Views using DirectX 12 (Presented by Intel)

One of the new features of DirectX 12 is Raster Ordered Views. This adds ordering back into Unordered Access Views, removing race conditions within a pixel shader when multiple in-flight pixels write to the same XY screen coordinates. This allows algorithms that previously required linked lists of pixel data to be processed efficiently in bounded memory. The talk shows how everything from order-independent transparency to volumetric shadow mapping and even post-processing can benefit from using Raster Ordered Views to provide efficient and, more importantly, robust solutions suitable for real-time games. The session uses a mixture of real-world examples where these algorithms have already been implemented in games and forward-looking research to show some of the exciting possibilities that open up with this ability coming to DirectX.

http://schedule.gdconf.com/session/...red-views-using-directx-12-presented-by-intel
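On the PC side ROV support is just another cap bit you can query, so whether a given GPU exposes it is the same kind of tier question as above. A small hedged sketch, again assuming an already-created `device` as in the earlier snippets:

```cpp
// Check whether the driver exposes Rasterizer Ordered Views.
// Assumes a valid ID3D12Device* named `device` (see the earlier snippets).
D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options));

if (options.ROVsSupported)
{
    // The pixel shader can then declare its UAV as a rasterizer-ordered resource
    // (RasterizerOrderedTexture2D<> etc. in HLSL); reads/writes to the same pixel
    // location are serialized in primitive submission order.
}
else
{
    // Fall back to per-pixel linked lists plus a sort pass, as described in the talk.
}
```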
 
the hardware detects when pixel exports conflict and serializes

I don't quite understand. If we are talking about a modern GPU, it is just executing some code and reading/writing some memory. Where in that picture should we fit the serialization hardware? In the memory controller?
 
I'm not sure how Intel implemented it, but the description seems to indicate that once a program is subject to PixelSync, the GPU front end is forced not to launch a fragment program if the pixels it writes to are currently the output target of an already active batch.
If there is no overlap, or the extension is not invoked, the GPU will freely launch shaders even if it turns out that they are going to write to the same output location, and it's not obligated to make sure they write to those locations in the order they were launched.
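So as a mental model (purely illustrative, not how Intel actually wired it), the serialization doesn't have to sit in the memory controller at all; it can be a scoreboard at the fragment launch stage that holds back a new pixel wave while an in-flight one owns the same screen coordinate. Something like:

```cpp
// Toy model of pixel-scoreboard serialization. NOT real hardware or a real API,
// just an illustration of "don't launch a fragment whose pixel is already in flight".
#include <cstdint>
#include <unordered_set>

struct Fragment { uint32_t x, y, id; };

class PixelScoreboard {
    std::unordered_set<uint64_t> inFlight; // pixels owned by a running fragment
    static uint64_t key(uint32_t x, uint32_t y) { return (uint64_t(y) << 32) | x; }
public:
    // Called by the "front end" before launching a fragment program.
    bool tryLaunch(const Fragment& f) {
        if (inFlight.count(key(f.x, f.y)))
            return false;               // conflict: stall this fragment, launch others
        inFlight.insert(key(f.x, f.y));
        return true;
    }
    // Called when a fragment's ordered section / export has completed.
    void retire(const Fragment& f) { inFlight.erase(key(f.x, f.y)); }
};
```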
 
edit: I was wrong.

Note: a lot of this is basic, but it was new to me.

This isn't anything concrete, so it's probably full of holes, but it just occurred to me that there is a way to make the case that the Xbox One has some DX12/Maxwell features. I didn't realize this, but it's Nvidia who have been pushing for voxel global illumination. They've been trying to sell it since 2012 and have had difficulty getting adoption due to the hardware requirements.

To really push it they've made it hardware accelerated, and they have been working closely with the UE4 team. (Edit: according to some documentation I'm reading, it was Epic that created SVOGI, a technique that leveraged the work of an Nvidia engineer's thesis. I guess it just makes sense to credit them both.)

From what I understand of VXGI, the algorithm can be hardware accelerated with the following, from AnandTech:
The forthcoming Direct3D 11.3 features, which will form the basis (but not entirety) of what’s expected to be feature level 11_3, are Rasterizer Ordered Views, Typed UAV Load, Volume Tiled Resources, and Conservative Rasterization. Maxwell 2 will offer full support for these forthcoming features, and of these features the inclusion of volume tiled resources and conservative rasterization is seen as being especially important by NVIDIA, particularly since NVIDIA is building further technologies off of them.
So volume tiled resources (VTRs) are important to voxel-based implementations, as is conservative rasterization.
Like VTR, voxels play a big part here, as conservative rasterization can be used to build a voxel. However, it also has use cases in more accurate tiling and even collision detection. This feature is technically possible on existing hardware, but the performance of such an implementation would be very low, as it's essentially a workaround for the lack of necessary support in the rasterizers. By implementing conservative rasterization directly in hardware, Maxwell 2 will be able to perform the task far more quickly, which is necessary to make the resulting algorithms built on top of this functionality fast enough to be usable.

It's apparent that there is a very close relationship between Maxwell's feature set, what we find in DX12, and ultimately what ends up in Unreal Engine 4.

UE4 supports VXGI, and Fable Legends has fairly great GI, so I think it makes sense that it's running VXGI and not some custom Lionhead variant. If we agree with that statement, I don't think it's far-fetched to believe that the XBO has at least the feature set that has already been announced for DX12: typed UAV loads, ROVs, conservative rasterization, and volume tiled resources.
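For what it's worth, on PC all four of those show up as individually queryable caps, so you don't have to guess there; whether the XBO's GCN-based GPU reports the same is exactly the open question. Sketch as before, reusing an existing `device`:

```cpp
// Query the four caps mentioned above on a PC driver.
// Assumes a valid ID3D12Device* named `device` (see the earlier snippets).
D3D12_FEATURE_DATA_D3D12_OPTIONS caps = {};
device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &caps, sizeof(caps));

std::printf("Typed UAV loads (extra formats): %d\n", caps.TypedUAVLoadAdditionalFormats);
std::printf("Rasterizer Ordered Views:        %d\n", caps.ROVsSupported);
std::printf("Conservative rasterization tier: %d\n",
            static_cast<int>(caps.ConservativeRasterizationTier));
// Tier 3 tiled resources is what adds 3D/volume tiled resources.
std::printf("Tiled resources tier:            %d\n",
            static_cast<int>(caps.TiledResourcesTier));

// (To actually use conservative raster you'd set
//  psoDesc.RasterizerState.ConservativeRaster = D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON
//  when building the graphics pipeline state.)
```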

The reason I think the XBO has these features is that in 2012, when Nvidia was demoing VXGI at their conference, they were only able to get 16 FPS at full HD resolution in a tech demo. That hardware should have been a GeForce GTX 680 at the time, IIRC, and that has more juice than the Xbox One.

So I'm thinking that for the XBO to run Fable Legends at 30 fps @ 900p there must be some additional help producing that performance. It must contain features similar to Maxwell 2's.

In this article here http://blogs.nvidia.com/blog/2014/09/19/maxwell-and-dx12-delivered/ it reads like the DX12 pre-release back at GDC in March 2014 was actually more about ironing out bugs and inefficiencies in the API for games that wanted to target DX12, not a discussion about the feature set. That feature set, which they had been working on with Nvidia for such a long time, was likely finalized long ago.

TL;DR: I think the XBO has it. Whether it has even more is questionable, but of what has been announced thus far, the XBO is capable.
 
Lionhead made their own GI and contributed it back into UE4.
oh shit. Seriously? LOL.

Well there goes my post.

If that's the case, I'm thinking it was going to follow voxel cone tracing via PRT/TR, similar to how the PS4 would need to accomplish it without the additional Maxwell 2 hardware features. http://www.neogaf.com/forum/showthread.php?t=656961

But if Lionhead made their own GI without any sort of tracing, then it's all blah. I know how to edit my above post: I was wrong.
 