It's perfectly legal behavior. Do you know how to solve this problem on today's hardware? It has nothing to do with GCN's scalar unit. You solve this problem today by forcing each "thread" in a compute shader to run on the entire SIMD/vector unit (i.e. treat the vector unit as a scalar unit)...
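To make that concrete, here's a rough CUDA sketch of the trick (CUDA just standing in for whatever compute language you prefer, and `WorkItem` / `doPathA` / `doPathB` are made-up names): launch one warp per logical work item, let lane 0 pick the code path, and broadcast that choice so the whole SIMD stays converged no matter which "function" each item wants.

```
#include <cuda_runtime.h>

// Made-up work item: which "function" to run, plus its input.
struct WorkItem { int pathId; float payload; };

__device__ float doPathA(float x) { return x * 2.0f; }
__device__ float doPathB(float x) { return x + 1.0f; }

// One logical "thread" per hardware warp: the vector unit is effectively
// being used as a scalar unit. Assumes blockDim.x is a multiple of 32.
__global__ void perWarpDispatch(const WorkItem* items, float* out, int itemCount) {
    const unsigned fullMask = 0xffffffffu;
    int warpId = (blockIdx.x * blockDim.x + threadIdx.x) / 32;
    int lane   = threadIdx.x % 32;
    if (warpId >= itemCount) return;          // whole warp exits together

    // Lane 0 reads the "which function do I call" decision...
    int   path = 0;
    float x    = 0.0f;
    if (lane == 0) { path = items[warpId].pathId; x = items[warpId].payload; }

    // ...and broadcasts it, so all 32 lanes take the same branch and the
    // SIMD never diverges, however many different paths the items want.
    path = __shfl_sync(fullMask, path, 0);
    x    = __shfl_sync(fullMask, x, 0);

    float r = (path == 0) ? doPathA(x) : doPathB(x);
    if (lane == 0) out[warpId] = r;
}
```

Launch it with 32x as many threads as work items (block size a multiple of 32), e.g. `perWarpDispatch<<<(n * 32 + 255) / 256, 256>>>(items, out, n);`. The obvious downside is visible right there: 31 of the 32 lanes are mostly along for the ride unless they cooperate on the item.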
I don't think this works the way you think it does. The feature you're describing for Turing was more about making scheduling possible in various circumstances, not "reducing the cost of multiple function calls". In compute shaders, we have "threads", right? Well, they aren't actually...
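If it helps, here's a tiny CUDA illustration (again just standing in for HLSL/whatever) of the "they aren't actually threads" part: lanes in a warp share one instruction stream, so a data-dependent branch splits the warp into groups that run as SIMD with different active masks, and `__activemask()` makes that visible. What Volta/Turing's independent thread scheduling changed is how those groups can interleave and make forward progress (e.g. lanes spin-waiting on other lanes in the same warp stop deadlocking), not how much a function call costs.

```
#include <cstdio>

// Launch with a single warp, e.g. showDivergence<<<1, 32>>>(d_flags);
// flags is 32 ints that decide which side of the branch each lane takes.
__global__ void showDivergence(const int* flags) {
    int lane = threadIdx.x % 32;
    if (flags[lane] != 0) {
        // Only the lanes whose flag is set are active on this side.
        printf("lane %2d on branch A, active mask = 0x%08x\n", lane, __activemask());
    } else {
        // The remaining lanes run this side with the complementary mask
        // (serially on pre-Volta parts, possibly interleaved on Volta+).
        printf("lane %2d on branch B, active mask = 0x%08x\n", lane, __activemask());
    }
}
```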
I mean...it's hard to argue against something that you pulled out of thin air. Can you cite specific examples of the DXR 1.0 API you feel are not portable performance-wise? I don't find that to be DXR 1.0's limitation, but rather its awkwardness in integrating with existing engines...
Hey, I want callable shaders just as much as you :wink:, but remember, graphics hardware is like everything else in life: a zero-sum game. To increase flexibility in one area, we must lose some efficiency [relative performance] in another. IHVs are very sensitive to this (remember: bar graphs of...
This is exactly my point! How many GPUs support mesh shaders? Turing and RDNA2? MS is a bad guy for not supporting a feature that at this very moment only ONE architecture on ONE IHV supports? Really? You realize mesh shaders won't be fully utilized in game engines for like another 5 years...
But ask yourself why that is. Remember, 1.0's purpose was to make it easier for developers to hit acceptable levels of performance without needing to know how the underlying hardware worked. For instance, there is nothing in DXR that mandates the underlying acceleration structure be a BVH. It...
I think we can do without the insults.
Oversimplification:
DXR 1.0 = driver does the ray scheduling/reordering
DXR 1.1 = developer does the ray scheduling/reordering
They both have pros and cons. 1.1 does not make 1.0 "obsolete". 1.1 is easier to integrate into existing engines (in fact...
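To put a picture on what "ray scheduling/reordering" means: the whole game is getting rays that will run the same shader next to each other, so a SIMD unit isn't juggling ten materials at once. Under 1.0 the driver/hardware is free to do something like that for you behind TraceRay; with inline tracing you'd do it yourself before your traversal loop. A toy CUDA/Thrust sketch (the `Ray` struct and `materialId` key are just placeholders):

```
#include <thrust/device_vector.h>
#include <thrust/sort.h>

struct Ray { float origin[3]; float dir[3]; };

// Bucket rays by the thing that decides which code they'll run (a material /
// hit-group id here), so equal ids end up contiguous and a later kernel sees
// mostly one shader per warp instead of a random mix.
void reorderRaysByMaterial(thrust::device_vector<int>& materialId,
                           thrust::device_vector<Ray>& rays) {
    thrust::sort_by_key(materialId.begin(), materialId.end(), rays.begin());
}
```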
Isn't that...good? :D How it performs in relation to Turing (or whatever) is meaningless.
You don't say :smile: https://www.gdcvault.com/play/1024656/Advanced-Graphics-Tech-Moving-to
Try not to think of it as "DX12 is teh suxxor" or "Vulkan is a GCN construct" and think of it more as "Wow...
Mine came today but it'll probably sit in a box for a week or two. I have gigabit FiOS and live in NYC, so I imagine I'm the best-case scenario for Stadia. Really have 0 desire to play Destiny 2 though. :razz: When I get around to it I'll let the class know how it is.
What I'm claiming is that all of that research is a drop in the bucket compared to the time/effort/resources that have been put into rasterization. I think it's reasonable to still expect big advancements in the future for ray tracing (and rasterization!). Let's look at the popular two-level BVH...
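For anyone following along, here's the shape of that two-level setup in a heavily simplified CUDA sketch (nothing vendor- or API-specific, the triangle test is elided, and all the names are made up). The top level indexes instances, each instance carries a transform plus a pointer to a reusable bottom-level tree, so per-frame changes (moving objects) only touch the small top level while the expensive per-mesh builds get shared.

```
// Bottom level: built once per unique mesh, in that mesh's local space.
// A node is a leaf when triCount > 0 (the actual triangle test is elided).
struct Aabb     { float lo[3], hi[3]; };
struct BlasNode { Aabb bounds; int left, right; int firstTri, triCount; };

// Top level: one entry per placed object. Moving an object only means
// updating its transform here; the BLAS it points at is shared and untouched.
struct TlasInstance {
    float worldToLocal[12];   // 3x4 row-major transform from world into BLAS space
    const BlasNode* blas;     // reused by every instance of the same mesh
};

// Ray vs. axis-aligned box (slab test); assumes no zero direction components.
__device__ bool hitAabb(const Aabb& b, const float o[3], const float d[3]) {
    float t0 = 0.0f, t1 = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / d[a];
        float tn  = (b.lo[a] - o[a]) * inv;
        float tf  = (b.hi[a] - o[a]) * inv;
        if (tn > tf) { float tmp = tn; tn = tf; tf = tmp; }
        if (tn > t0) t0 = tn;
        if (tf < t1) t1 = tf;
    }
    return t0 <= t1;
}

// Walk one instance's bottom-level tree with a small explicit stack. A real
// traversal would intersect the leaf's triangles; here a leaf whose box is
// hit counts as a hit, just to keep the two-level shape visible.
__device__ bool walkBlas(const BlasNode* nodes, const float o[3], const float d[3]) {
    int stack[32];
    int top = 0;
    stack[top++] = 0;                       // root is node 0 by convention here
    while (top > 0) {
        const BlasNode& n = nodes[stack[--top]];
        if (!hitAabb(n.bounds, o, d)) continue;
        if (n.triCount > 0) return true;    // leaf
        if (n.left  >= 0 && top < 32) stack[top++] = n.left;
        if (n.right >= 0 && top < 32) stack[top++] = n.right;
    }
    return false;
}

// One ray per thread. The "top-level walk" is just a loop over instances
// here; a real TLAS is itself a small BVH built over the instance bounds.
__global__ void traceAnyHit(const TlasInstance* tlas, int instanceCount,
                            const float* origins, const float* dirs,
                            int* hits, int rayCount) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r >= rayCount) return;
    const float* wo = origins + 3 * r;
    const float* wd = dirs    + 3 * r;
    int hit = 0;
    for (int i = 0; i < instanceCount && !hit; ++i) {
        const float* m = tlas[i].worldToLocal;   // 3x4, row-major
        float o[3], d[3];
        for (int row = 0; row < 3; ++row) {
            o[row] = m[4*row]*wo[0] + m[4*row+1]*wo[1] + m[4*row+2]*wo[2] + m[4*row+3];
            d[row] = m[4*row]*wd[0] + m[4*row+1]*wd[1] + m[4*row+2]*wd[2];
        }
        hit = walkBlas(tlas[i].blas, o, d) ? 1 : 0;
    }
    hits[r] = hit;
}
```

The interesting property is the cost split: per-mesh BLAS builds are the expensive part and get reused across instances and frames, while animation only forces a rebuild/refit of the comparatively tiny top level.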
I'm not saying it's never been tried! But let me rephrase it another way. I think one of the key aspects of ImgTec's ray tracing hardware was that they essentially had (incoming oversimplification) "hardware-accelerated" BVH construction and searching...