> Why on earth would they decide that? uhhh

Would be hilarious if it's because of ARM.
The Road to Replacing DXIL
As we look to the future, maintaining a proprietary IR format (even one based on an open-source project) is counter to our commitments to open technologies, so Shader Model 7.0 will adopt SPIR-V as its interchange format. Over the next few years, we will be working to define a SPIR-V environment for Direct3D, and a set of SPIR-V extensions to support all of Direct3D’s current and future shader programming features through SPIR-V.
Appendix: A Brief History of GPU Interchange Formats
LLVM’s bitcode format has some significant drawbacks. Notably, it is not version-stable: newer versions of LLVM support a lossy upgrade of older LLVM IR modules, but a newer LLVM cannot write IR modules that older versions of LLVM can read. Additionally, LLVM bitcode is a bit-packed file format, which has two big drawbacks: (1) it compresses poorly, and (2) it is hard to read or write from tools that aren’t LLVM.
To solve these problems, The Khronos Group developed SPIR-V as a successor to SPIR. SPIR-V is ideologically aligned with LLVM IR, but it supports a stable and simple binary serialization, which makes SPIR-V far easier to read and write with simple tools than LLVM IR is.
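To make the “simple binary serialization” point concrete, here is a minimal sketch (my illustration, not official SPIR-V tooling; it assumes the module and host share endianness and does no real validation) of a standalone C++ reader that walks a module’s instruction stream:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s module.spv\n", argv[0]);
        return 1;
    }

    // Slurp the module as a flat array of 32-bit words.
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }
    std::vector<uint32_t> words;
    for (uint32_t w; std::fread(&w, sizeof w, 1, f) == 1;) words.push_back(w);
    std::fclose(f);

    // Fixed five-word header: magic, version, generator, ID bound, schema.
    if (words.size() < 5 || words[0] != 0x07230203u) {
        std::fprintf(stderr, "not a SPIR-V module\n");
        return 1;
    }
    std::printf("SPIR-V %u.%u, id bound %u\n",
                (words[1] >> 16) & 0xFFu, (words[1] >> 8) & 0xFFu, words[3]);

    // Each instruction's first word packs its total word count (high 16
    // bits) and opcode (low 16 bits), so the stream can be walked without
    // any bit-level decoding.
    for (size_t i = 5; i < words.size();) {
        uint32_t wordCount = words[i] >> 16;
        uint32_t opcode = words[i] & 0xFFFFu;
        std::printf("  opcode %4u, %2u words\n", opcode, wordCount);
        if (wordCount == 0) break;  // malformed module; avoid looping forever
        i += wordCount;
    }
    return 0;
}
```

Because every instruction is a run of 32-bit words whose first word declares its own length, a tool can skip, inspect, or rewrite instructions without understanding all of them; doing the same with LLVM’s bit-packed bitcode requires reimplementing a non-trivial bitstream reader first.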
> So does this mean that, moving forwards, we can expect PCs to be a little more console-like in terms of the API's ability to access lower-level or unique features of a particular GPU's architecture?

Doesn't really affect any of that kind of thing. In the long run it mainly makes it easier to write portable GPU code across platforms and to use a shared set of tools, so there are ideally fewer platform-specific surprises on that front. Right now there's a lot of nonsense involved in making sure shaders compile well (or at all) for various platforms and in dodging various toolchain bugs without breaking things for another toolchain.
> With how that's phrased, are they also saying that Shader Model 7 will be released under DirectX 12?

That aspect is definitely interesting. If a change this big doesn't make Microsoft declare DirectX 13, then there might never be a DirectX 13.
The principle of charity requires interpreting serious claims in the best possible light. Here, that should have yielded robust, well-optimized ExecuteIndirect benchmarks (and even baseline compute and mesh shader implementations) to serve as competitive comparisons against Work Graph functionality. At the time of writing, those benchmarks have yet to materialize, and the only test cases are closer to strawmen that can be held up for an easy victory.
- Across the board, Work Graph performance is not very exciting
- Emulation with core Vulkan compute shader features is up to 3x faster
- Comparison test cases against ExecuteIndirect (which show ExecuteIndirect performing worse) do not effectively leverage that functionality, as Hans-Kristian noted nearly six months ago (see the sketch after this list for what that GPU-driven path looks like)
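For context, here is a minimal C++ sketch (assuming an existing ID3D12Device and command list; helper names like CreateDispatchSignature are my own illustration, not from the post) of the GPU-driven ExecuteIndirect pattern the critique argues the comparisons fail to exercise:

```cpp
#include <d3d12.h>

// One indirect command: just the thread-group counts for a compute dispatch.
// A more aggressive renderer would prepend root-constant or view changes so
// each GPU-generated command can also rebind its own data.
struct IndirectCommand {
    D3D12_DISPATCH_ARGUMENTS dispatch;  // ThreadGroupCountX/Y/Z
};

ID3D12CommandSignature* CreateDispatchSignature(ID3D12Device* device) {
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DISPATCH;

    D3D12_COMMAND_SIGNATURE_DESC desc = {};
    desc.ByteStride       = sizeof(IndirectCommand);
    desc.NumArgumentDescs = 1;
    desc.pArgumentDescs   = &arg;

    // The root signature may be null because this signature changes no
    // root arguments; error handling is omitted for brevity.
    ID3D12CommandSignature* signature = nullptr;
    device->CreateCommandSignature(&desc, nullptr,
                                   __uuidof(ID3D12CommandSignature),
                                   reinterpret_cast<void**>(&signature));
    return signature;
}

void RecordIndirectDispatches(ID3D12GraphicsCommandList* cmdList,
                              ID3D12CommandSignature* signature,
                              ID3D12Resource* argumentBuffer,  // IndirectCommand array, GPU-written
                              ID3D12Resource* countBuffer,     // single UINT, written by a GPU pass
                              UINT maxCommands) {
    // The actual command count is read from countBuffer on the GPU timeline,
    // so a culling/expansion pass decides how much work runs without any
    // CPU round trip.
    cmdList->ExecuteIndirect(signature, maxCommands,
                             argumentBuffer, 0,
                             countBuffer, 0);
}
```

This count-buffer-driven expansion is the same class of GPU-driven scheduling that Work Graphs formalize, which is why a comparison that leaves it on the table is easy to win.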
I’m not saying that Work Graphs are inherently bad.
Yet.
At this point, however, I haven’t seen compelling evidence that validates the hype surrounding the tech. I haven’t seen great benchmarks and demos. Maybe it’s a combination of that and still-improving driver support. Maybe it’s as-yet-unavailable functionality awaiting future hardware. In any case, I haven’t seen a strong, fact-based technical argument which proves, beyond a doubt, that this is the future of graphics.
Before anyone else tries to jump on the Work Graph hype train, I think we owe it to ourselves to thoroughly interrogate this new paradigm and make sure it provides the value that everyone expects.
> An interesting blog post that pours some cold water on Work Graphs

Some really interesting links in this post!
> An interesting blog post that pours some cold water on Work Graphs

That blog post glosses over one of the biggest wins in favour of Work Graphs: the compute rasterization sample from AMD (1.7 ms with Work Graphs vs. 2.9 ms with ExecuteIndirect).