> Last and current generation.

They do have some extra access to hardware which Win/DX12 doesn't, IIRC.
> You can also find developers that think DX12 is better because you have more explicit control. And I'm talking devs working on major games.

Well, yeah, but then you have results where this "explicit control" is used in ways which are counterproductive to the actual point of getting more performance out of the h/w.
> The API does not describe a singular correct way to do things. Many things can be implemented in many ways, some that will favour one architecture over another and some that will be terrible across the board.

The problem is that with these new APIs developers should often be implementing things "in many ways" (i.e. differently for different h/w) - but they often don't, because they only have the resources to do it one way (barely), and that one way is usually derived directly from how stuff is done for console h/w.
> I think developers will need to code DX12 games for RDNA and GCN, and DX12 separately for Nvidia GPUs. Just like they do with Xbox and PS.

Who will do that? Epic may do it for their renderer(s), because that's basically what they do for a living, but other developers likely won't. Hence this thread and what we get in actual shipped s/w.
> The only thing we know for sure is that DX12 has sucked for years on all PC architectures. Pointing out one or two games where it works decently doesn't change that fact. Soon it won't matter, as there won't be any DX11 fallback to compare against.

I've wondered about this in the past. When DX10 came around, it was often slower than DX9 in games that supported both (Crysis was like this). That was an early example, but I don't recall later games being much different in this respect. Eventually everyone dropped DX9 for DX10/11 and we could no longer make comparisons.
> Hopefully D3D13 will be something less verbose than Vulkan.

I don't think we're getting D3D13 any time soon. MS seems committed to updating D3D12, and there's a general lack of understanding (or agreement) about what the next major API revision should even be.
> Eventually everyone dropped DX9 for DX10/11 and we could no longer make comparisons.

Yeah, it's already been happening for some years; most new releases use D3D12 (or Vulkan) exclusively, mostly because supporting two renderers is very expensive and D3D12 is the only API getting new features. So there are few ways to compare them even now, and there likely won't be any in the future.
You can also find developers that think dx12 is better because you have more explicit control.
Vulkan and DX12 were initially slower in Unreal/Unity because their RHIs didn't match the retained-mode grouping of Vulkan/DX12. Developers had to add hash maps under the RHI to map the dynamic API onto the retained PSO and descriptor set APIs, which added a lot of complexity and CPU cost.
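To make that concrete, here's a minimal C++ sketch of the kind of hash-map cache an immediate-mode RHI ends up needing under Vulkan. The RenderStateKey/PipelineCache names and the state packing are made up for illustration, not taken from Unreal or Unity, and the actual vkCreateGraphicsPipelines call is elided:

// Hypothetical sketch of a pipeline cache under an immediate-mode RHI.
// Every draw hashes the currently bound dynamic state to find (or build)
// the retained-mode VkPipeline that Vulkan actually wants.
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vulkan/vulkan.h>

struct RenderStateKey {
    VkShaderModule vs;
    VkShaderModule fs;
    VkPrimitiveTopology topology;
    uint32_t packedBlendDepthRasterState; // illustrative packing
    bool operator==(const RenderStateKey&) const = default; // C++20
};

struct RenderStateKeyHash {
    size_t operator()(const RenderStateKey& k) const {
        // Cheap hash combine; a production RHI would use a better mixer.
        size_t h = std::hash<uint64_t>()((uint64_t)k.vs);
        h = h * 31 + std::hash<uint64_t>()((uint64_t)k.fs);
        h = h * 31 + (size_t)k.topology;
        h = h * 31 + k.packedBlendDepthRasterState;
        return h;
    }
};

class PipelineCache {
public:
    // Called per draw: a hash lookup on a hit, a full PSO compile on a miss.
    VkPipeline getOrCreate(VkDevice device, const RenderStateKey& key) {
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;
        VkPipeline pso = buildPipeline(device, key); // slow, can hitch mid-frame
        cache_.emplace(key, pso);
        return pso;
    }
private:
    // Wraps vkCreateGraphicsPipelines; definition elided in this sketch.
    VkPipeline buildPipeline(VkDevice device, const RenderStateKey& key);
    std::unordered_map<RenderStateKey, VkPipeline, RenderStateKeyHash> cache_;
};

Even the hit path costs hashing and a lookup on every draw, and a miss stalls on a full pipeline compile, which is where the added CPU cost comes from.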
Persistence and caching are always a trade-off. If you require it, then the user has to manage the persistent data in some way. If their design is not 1:1 with your API design, then there will be hash maps and similar slow mapping data structures.
Also, if you need dynamic behavior, Vulkan adds extra driver overhead: you have to call various APIs to update the descriptors and, especially, to recompile the PSOs.
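For the descriptor side, here's a rough sketch of the per-draw path an immediate-mode RHI can fall back to. The entry points (vkAllocateDescriptorSets, vkUpdateDescriptorSets) are the real Vulkan API; the surrounding pool management and error handling are elided:

// Hypothetical per-draw descriptor update under an immediate-mode RHI.
// One pool allocation plus one driver-side write per draw adds up quickly.
#include <vulkan/vulkan.h>

VkDescriptorSet allocateAndWriteUniform(VkDevice device,
                                        VkDescriptorPool pool,
                                        VkDescriptorSetLayout layout,
                                        VkBuffer uniforms,
                                        VkDeviceSize offset,
                                        VkDeviceSize range) {
    VkDescriptorSetAllocateInfo allocInfo{};
    allocInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
    allocInfo.descriptorPool = pool;
    allocInfo.descriptorSetCount = 1;
    allocInfo.pSetLayouts = &layout;

    VkDescriptorSet set = VK_NULL_HANDLE;
    vkAllocateDescriptorSets(device, &allocInfo, &set); // error handling elided

    VkDescriptorBufferInfo bufferInfo{};
    bufferInfo.buffer = uniforms;
    bufferInfo.offset = offset;
    bufferInfo.range = range;

    VkWriteDescriptorSet write{};
    write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    write.dstSet = set;
    write.dstBinding = 0;
    write.descriptorCount = 1;
    write.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    write.pBufferInfo = &bufferInfo;
    vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);

    return set; // a retained-mode engine would build this once and reuse it
}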
I talk about these issues in my forthcoming blog post. The higher performance you get from persistent data is tricky to obtain.
Retained vs immediate mode always has these considerations. Both have advantages and disadvantages. Implementing retained mode as an optimization is tricky as it forces the user to deal with persistent data == caching. It's a win if data is mostly static, but a loss otherwise.
Another data point: GPU-driven rendering is retained mode. You have persistent scene data in GPU memory that you delta-update from the CPU. You need object persistence and change tracking. A well-optimized batched delta update is fast, but the whole system is quite complex.
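A rough sketch of what the CPU side of that change tracking can look like, assuming one persistent GPU buffer indexed by object. The GpuScene/ObjectGpuData names and layout are made up for illustration, and the actual GPU copy is left as a comment:

// Hypothetical CPU-side change tracking for a persistent GPU scene buffer.
#include <cstdint>
#include <cstring>
#include <vector>

struct ObjectGpuData {
    float transform[12];  // 3x4 object-to-world matrix
    uint32_t meshIndex;
    uint32_t pad[3];
};

class GpuScene {
public:
    void setTransform(uint32_t objectIndex, const float (&m)[12]) {
        std::memcpy(cpuShadow_[objectIndex].transform, m, sizeof m);
        if (!dirtyFlag_[objectIndex]) {   // track changes instead of re-uploading everything
            dirtyFlag_[objectIndex] = true;
            dirtyList_.push_back(objectIndex);
        }
    }

    // Once per frame: upload only the objects that changed, as one batched pass.
    void flush(/* command buffer / upload heap omitted in this sketch */) {
        for (uint32_t index : dirtyList_) {
            // Enqueue a copy of cpuShadow_[index] into the persistent GPU buffer
            // at index * sizeof(ObjectGpuData); real code coalesces adjacent ranges.
            dirtyFlag_[index] = false;
        }
        dirtyList_.clear();
    }

private:
    std::vector<ObjectGpuData> cpuShadow_;  // CPU mirror of the GPU buffer
    std::vector<bool> dirtyFlag_;           // per-object "changed this frame" bit
    std::vector<uint32_t> dirtyList_;       // compact list of changed objects
};

The win is that flush() touches only what changed; the cost is all the persistence and bookkeeping around it, which is the complexity being described.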
I worked on a UE4 game (Claybook) when their DX12 backend was new. The Unreal 4.18 DX12 backend was slower than their DX11 backend on both PC and Xbox.
I also worked at Unity, where DX12 was kept in beta for several years because DX11 was faster. My team optimized the DX12 backend.
Of course you can make a DX12 or Vulkan backend that's faster than DX11, but this turned out to be difficult for big engines. Their RHIs were designed around classic OpenGL/DX11 binding and render state management, and Vulkan and DX12 fit poorly into RHIs like this.
And at HypeHype, I rewrote our RHI completely to make Vulkan fast. That's the way you do it. But needing that is a sign that Vulkan was not well designed to fit the RHIs engines currently use, and that's not a good thing: you have to change the higher-level code too to make Vulkan/DX12 fast.