DmitryKo
Veteran
> There are still a handful of fixed-function hardware units that handle clipping, scan conversion, depth testing, and spinning up pixel shaders.

OK, thanks. So even if they could be bothered to implement it as some hack, it would probably be SLOW.
> Personally I'm totally fine with feature levels, but I suppose in practice they have a hard time fitting a wide array of hardware into such coarse buckets of functionality.

Since Direct3D 12 strictly requires D3D11-class hardware, they no longer have a "wide array of hardware", and the 9_x, 10_x and 11_x feature levels would not represent real hardware. These "down-level" feature levels are only required for compatibility with legacy game assets such as HLSL shader code; they would all be running on D3D11 hardware.
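For illustration, a minimal sketch (assuming only the public d3d12.h/d3dcommon.h headers) of how that looks at the API level: the device itself must be created against 11_0-class hardware, and the lower feature levels are merely something the application can query for compatibility purposes:

```cpp
// Minimal sketch: D3D12 device creation requires 11_0-class hardware;
// lower feature levels exist only as something to query against.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Device creation fails outright on anything below feature level 11_0.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // The "down-level" feature levels can still be queried for asset
    // compatibility, but they all run on the same D3D11-class hardware.
    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_9_1, D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {
        _countof(requested), requested, D3D_FEATURE_LEVEL_9_1 };
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                &levels, sizeof(levels));
    // levels.MaxSupportedFeatureLevel now holds the highest supported level.
    return 0;
}
```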
> Please explain how this is possible when it retains the same Direct3D 11 API?

XB1 is not running DX11; it uses a bespoke API (commonly referred to as the XB1 API). The XB1 API is much closer to the tin than DX11.
Microsoft and AMD might have a very optimized "driver" and runtime that is tailored to the GCN architecture, and they might have extended the API with bundles or other improvements, but from all we know the resource-management and memory-management aspects of the D3D11 API have not gone anywhere on the Xbox One, whatever level of optimization they were able to apply.
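To make that concrete, here is a minimal D3D11 sketch of the kind of driver-managed resource handling being described; the context and constantBuffer parameters are assumed to have been created elsewhere, and the point is that the driver, not the application, deals with renaming and hazard tracking behind the Map call:

```cpp
// Minimal sketch of the D3D11 model: the driver owns resource memory, so a
// mid-frame constant-buffer update just maps the resource and the driver
// handles renaming and hazard tracking behind the scenes.
#include <d3d11.h>
#include <cstring>
#pragma comment(lib, "d3d11.lib")

// 'context' and 'constantBuffer' are assumed to be created elsewhere.
void UpdatePerFrameData(ID3D11DeviceContext* context,
                        ID3D11Buffer* constantBuffer,
                        const void* data, size_t size)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    // WRITE_DISCARD asks the driver for fresh memory; any copy the GPU is
    // still reading from stays alive until the GPU is done with it.
    if (SUCCEEDED(context->Map(constantBuffer, 0, D3D11_MAP_WRITE_DISCARD,
                               0, &mapped)))
    {
        memcpy(mapped.pData, data, size);
        context->Unmap(constantBuffer, 0);
    }
}
```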
Direct3D 12, on the other hand, does away with driver-side resource and memory management entirely.
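A minimal sketch of the D3D12 side of that contrast, again assuming only the public headers: the application explicitly chooses the heap type, and it, not the driver, must fence GPU work before the memory can be reused or freed:

```cpp
// Minimal sketch: in D3D12 the application picks the memory pool and is
// itself responsible for synchronizing GPU work before touching the memory.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12Resource> CreateUploadBuffer(ID3D12Device* device, UINT64 size)
{
    D3D12_HEAP_PROPERTIES heap = {};
    heap.Type = D3D12_HEAP_TYPE_UPLOAD;   // explicit choice of memory pool

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = size;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    ComPtr<ID3D12Resource> buffer;
    device->CreateCommittedResource(&heap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_GENERIC_READ,
                                    nullptr, IID_PPV_ARGS(&buffer));
    return buffer;
}

// Before this buffer may be reused or released, the application must wait on
// a fence it signalled after the submission that last used it, e.g.:
//   queue->Signal(fence, ++fenceValue);
//   fence->SetEventOnCompletion(fenceValue, event);
//   WaitForSingleObject(event, INFINITE);
```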
> From the Titanfall tech talk, there is evidence that DX11.X does not have any of the multithreaded draw-call performance of Mantle or D3D12.

Yep, that's why they had to throw out driver-side resource management entirely, which would otherwise require insane levels of inter-process synchronization, and instead devised a new model where resources are static and immutable, so access to them can easily be multithreaded.
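For illustration, a minimal sketch of that multithreaded recording model, assuming a device and queue created elsewhere; RecordDrawsForChunk is a hypothetical per-thread workload, not a real API:

```cpp
// Minimal sketch: each thread records into its own allocator + command list,
// and the only serialized step is the final ExecuteCommandLists submission.
// Because resources are immutable while recording, no cross-thread locking
// is needed around the draw calls themselves.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

void RecordFrameInParallel(ID3D12Device* device, ID3D12CommandQueue* queue,
                           unsigned threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (unsigned i = 0; i < threadCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
        workers.emplace_back([&, i] {
            // RecordDrawsForChunk is a hypothetical per-thread workload that
            // would issue draw calls into this thread's own command list:
            // RecordDrawsForChunk(lists[i].Get(), i);
            lists[i]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // Submission is the single serialized point.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```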