This has already been answered. All modern GPUs have RT h/w in them.

It is about asking what's better: Fast hardware solutions with problematic limitations, or tailored software solutions without them?
Well that’s the thing. We need tangible evidence that flexibility produces better results today than the “limited and inefficient” DXR guardrails. Until then it’s just wishful thinking. If AMD doesn’t provide that evidence then yes we can only hope someone else does.
My view on this is simple. We already have extremely flexible RT implementations (x86, CUDA) so the experts clearly understand the trade-offs of software vs hardware RT. If greater flexibility was the best option today then DXR would have been designed with that in mind.
Close to the metal is a big deal in console world. AMD's documentation supports that.

Lol I would hardly give AMD’s documentation credit for the incredible imagination, talent and hard work of console devs and artists.
I think this will come from titles designed around next generation consoles.
Ampere is twice as fast in compute performance.
Flexibility helps you work smarter by doing less work for the same result or to do things that are simply impossible due to constraints of hardware solutions. I suspect there isn’t much room to do the former as DXR seems to provide decent enough support for skipping work where it’s not needed.
The real benefits would be in doing even more advanced RT with more complex data structures and more detailed geometry. But that would be dog slow on today’s compute hardware anyway so it’s a moot point.
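To make the "skipping work" point concrete, here is a minimal software traversal sketch. Nothing here is any vendor's real implementation and every type and function name is made up for illustration. DXR really does expose some of these skips already (e.g. RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH and instance masks); the depth cutoff below is the kind of extra per-ray policy only a hand-rolled loop gives you.

```cpp
// Minimal sketch, all names hypothetical -- not any vendor's implementation.
#include <cstdio>
#include <utility>
#include <vector>

struct AABB { float lo[3], hi[3]; };
struct Node { AABB box; int left, right, firstTri, triCount, depth; };
struct Ray  { float o[3], inv_d[3], tmax; };   // inv_d = 1/direction

// Standard slab test against an axis-aligned box.
bool hitBox(const AABB& b, const Ray& r) {
    float t0 = 0.0f, t1 = r.tmax;
    for (int a = 0; a < 3; ++a) {
        float n = (b.lo[a] - r.o[a]) * r.inv_d[a];
        float f = (b.hi[a] - r.o[a]) * r.inv_d[a];
        if (n > f) std::swap(n, f);
        if (n > t0) t0 = n;
        if (f < t1) t1 = f;
    }
    return t0 <= t1;
}

// Two "work-skipping" policies in one loop: anyHit ends at the first hit
// (shadow rays), and maxDepth treats deep interior nodes as hits -- a crude
// geometric LOD for incoherent rays that a fixed API has no flag for.
bool traverse(const std::vector<Node>& nodes, const Ray& r,
              bool anyHit, int maxDepth) {
    std::vector<int> stack = {0};                  // start at the root
    while (!stack.empty()) {
        int i = stack.back(); stack.pop_back();
        const Node& n = nodes[i];
        if (!hitBox(n.box, r)) continue;           // whole subtree skipped
        if (n.triCount > 0 || n.depth >= maxDepth) {
            if (anyHit) return true;               // done, skip everything else
            continue;                              // (triangle test omitted)
        }
        stack.push_back(n.left);
        stack.push_back(n.right);
    }
    return false;
}

int main() {
    std::vector<Node> nodes = {
        { {{0,0,0},{1,1,1}},     1,  2, 0, 0, 0 }, // root
        { {{0,0,0},{0.5f,1,1}}, -1, -1, 0, 1, 1 }, // leaf, one triangle
        { {{0.5f,0,0},{1,1,1}}, -1, -1, 1, 1, 1 },
    };
    Ray r{ {-1, 0.2f, 0.2f}, {1, 4, 4}, 10 };
    std::printf("shadow-ray hit: %d\n", (int)traverse(nodes, r, true, 8));
}
```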
This has already been answered. All modern GPUs have RT h/w in them.
If modern means chips introduced this very year.

Brute force (e.g. the "Psycho" setting in Cyberpunk 2077), which seems to be what we're seeing in existing games, is upended by bandwidth. We've gone from 0 ray tracing to 11 in about 2 years and there's no more bandwidth on the horizon.
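For a sense of scale, here is a back-of-envelope version of that bandwidth argument. Every figure is an assumed round number for illustration, not a measurement, and real caches absorb much of this for coherent rays:

```cpp
// Back-of-envelope only: every number below is an assumption chosen for
// illustration. The point is how quickly naive (cache-miss-heavy) BVH
// traffic approaches an entire board's memory bandwidth.
#include <cstdio>

int main() {
    const double pixels       = 3840.0 * 2160.0; // 4K
    const double fps          = 60.0;
    const double raysPerPixel = 1.0;             // one primary ray only
    const double nodesPerRay  = 30.0;            // assumed BVH visits/ray
    const double bytesPerNode = 64.0;            // assumed node size

    double bytesPerSec = pixels * fps * raysPerPixel
                       * nodesPerRay * bytesPerNode;
    std::printf("worst-case BVH traffic: %.0f GB/s\n", bytesPerSec / 1e9);
    // ~955 GB/s -- roughly an RTX 3090's ~936 GB/s, for ONE ray per pixel,
    // before shading, textures or secondary rays. Caches rescue coherent
    // rays; incoherent bounce rays are what hits the wall.
}
```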
The trend is such indeed, but the interesting question is how far it will reach.
You're drawing a whole lot of conclusions from very little information.
Why is more flexibility only useful for more complex data structures?
This isn't the scientific compute market where effective throughput of a certain type of calculations is the determining factor. What matters is how power/cost effective an architecture is at producing XYZ visual results.
I've seen time and again developers claiming that Nvidia's greatest problem with their RT approach is that it's "too precise" for real-time raytracing.
AMD's approach could simply be an answer to that, and it proposes to do less but smarter.
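As one purely hypothetical reading of "less but smarter" (this is not AMD's documented approach, just an illustration of the idea): keep exact hits for primary visibility, but let rough bounce rays test cheap per-object proxy shapes instead of full triangle meshes. A minimal sketch, with all names invented:

```cpp
// Hypothetical illustration only -- NOT AMD's actual approach. Exact
// triangle hits stay reserved for primary visibility; rough bounce rays
// test cheap per-object proxy spheres instead, trading precision nobody
// can see after denoising for a lot less traversal work.
#include <cstdio>

struct Sphere { float cx, cy, cz, r; };   // proxy for a whole mesh

// Ray/sphere occlusion test; assumes the direction d is normalized.
bool hitProxy(const Sphere& s, const float o[3], const float d[3]) {
    float L[3] = {s.cx - o[0], s.cy - o[1], s.cz - o[2]};
    float b = L[0]*d[0] + L[1]*d[1] + L[2]*d[2];
    float c = L[0]*L[0] + L[1]*L[1] + L[2]*L[2] - s.r*s.r;
    // Hit if the quadratic has real roots and one of them lies in front.
    return b*b - c >= 0.0f && (b > 0.0f || c < 0.0f);
}

int main() {
    Sphere proxy{0, 0, 5, 1};
    float origin[3] = {0, 0, 0}, dir[3] = {0, 0, 1};
    std::printf("bounce ray occluded: %d\n", (int)hitProxy(proxy, origin, dir));
}
```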
This answer is much more general than my question.

This has already been answered. All modern GPUs have RT h/w in them.
But what about performance? You need a nice balance between speed and flexibility. Being flexible but with very low performance, or, kind of worse, with no way to tap into this flexibility (hello DXR?) is not a good thing...

That's the big question. Ofc. it seems the missing traversal HW is a drawback, and it explains why NV is faster in current games.
What's "FF" about a MIMD processor?NV has complete FF implementation of classical RT, including fixed BVH structure and traversal.
AMD only has intersection of boxes and triangles.

Which is what NV also has and Intel will likely have too.
Traversal is seemingly entirely compute

Traversal is entirely compute everywhere, the difference is in what this compute is running on and what is exposed to APIs and developers.
Because of that i may have called AMD 'compute RT', and NV 'fixed function', which ofc. is off but makes sense to me personally.

It's off by so much that it doesn't make sense at all. What would you call DXR running on Pascals?
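A toy model of the split being argued here may help; every function below is a stand-in, not real ISA. The point is that the traversal loop looks the same everywhere, and what differs is whether it occupies general shader ALUs (RDNA2, whose ISA exposes a HW instruction roughly like image_bvh_intersect_ray for just the box/triangle test) or a dedicated RT core that runs the equivalent of the whole loop while the SM shades other work (Turing/Ampere):

```cpp
// Toy model only; all functions are stand-ins, not real ISA.
#include <cstdio>

struct Node { float lo, hi; int left, right; };   // 1-D "AABB" for brevity

// Stand-in for RDNA2's intersection instruction: test one node, nothing more.
bool hw_intersect(const Node& n, float x) { return x >= n.lo && x <= n.hi; }

// The loop itself is plain compute: stack upkeep, node fetch, scheduling.
// On NV hardware the equivalent of this entire function lives in the RT core.
bool traverse_on_shader_core(const Node* nodes, float x) {
    int stack[32], top = 0;
    stack[top++] = 0;
    while (top > 0) {
        const Node& n = nodes[stack[--top]];
        if (!hw_intersect(n, x)) continue;        // the only "HW" step here
        if (n.left < 0) return true;              // leaf reached
        stack[top++] = n.left;
        stack[top++] = n.right;
    }
    return false;
}

int main() {
    Node nodes[] = {{0, 10, 1, 2}, {0, 5, -1, -1}, {5, 10, -1, -1}};
    std::printf("hit: %d\n", (int)traverse_on_shader_core(nodes, 7.5f));
}
```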
This has already been answered. All modern GPUs have RT h/w in them.
HW RT is a shiny dead end lure put up by Nvidia.
We're a LONG way, as in generations of hw improvements away, from switching to a purely generalized compute processor without some form of hw acceleration for specific fixed functions.
Since the whole discussion has its roots in consoles, those 10TF machines are not going to provide enough horsepower to deliver next gen graphics and meaningful ray tracing anyway. Like many techheads have said, it will be rather subtle since the hw isn't powerful enough. For now, NV's solution makes sense in performance. It seems like the 2001 Xbox's pixel and vertex shaders. Some more generations of GPUs and we might see more flexible solutions from NV.
Which is why triangle rt is, well, a dead end. Look at the performance it's getting. Last gen base games with a single raytracing effect on consoles, or bringing a $1200 gpu with the best tracing around to 30 fps, also on a limited title.
And it's not like it's "brand new" anymore. It's been around for 2 years now. Research has been done, best practices are getting established, and it's still a massive performance hog.
I am worried about RDNA2 consumer hardware, even without RT a 6900 fares no better than a 6800 (XT) in Cyberpunk (cache problems). And I love the idea of tracing. "It just works" is a great thing for devs to have once it's set up. But you can do tracing with specialized hw, the reason gpus have it at all is for "realtime" and api limits to do with cross vendor friendliness, or you can do tracing in compute. Tracing that's simd friendly, that's fully optimized across all vendors and platforms.
HW RT is a shiny dead end lure put up by Nvidia. Dreams can do traced global illumination on a PS4. There's no reason every dev can't do tracing with the same efficiency, no special hw needed. But they've been lured in by vague promises. Cyberpunk would've been much better off putting that R&D time elsewhere, it could've run and looked better on every platform. But it's only the beginning of this gen. And if triangle meshes are dead for primary rays (see UE5, Dreams, sebbbi's project) in favor of compute, raster hw be damned, there's no reason incoherent secondary rays can't do the same thing, ray tracing hw be damned (see UE5, Claybook, Control, The Last of Us 2, etc.).
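For what "simd friendly" compute tracing can mean in practice, one common ingredient is binning rays by direction before tracing, so each wave/warp walks similar paths through the scene instead of scattering. A minimal sketch with invented types, not code from Dreams, UE5 or any shipped engine:

```cpp
// Sketch of ray binning for coherence; all types invented for illustration.
#include <array>
#include <cstdio>
#include <vector>

struct Ray { float ox, oy, oz, dx, dy, dz; };

// 3 sign bits of the direction -> 8 coherence bins (direction octants).
int octant(const Ray& r) {
    return (r.dx < 0) | ((r.dy < 0) << 1) | ((r.dz < 0) << 2);
}

int main() {
    std::vector<Ray> rays = {
        {0,0,0,  1, 1, 1}, {0,0,0, -1, 1, 1},
        {0,0,0,  1,-1, 1}, {0,0,0,  1, 1,-1},
        {0,0,0,  1, 1, 1},                      // same octant as ray 0
    };
    std::array<std::vector<Ray>, 8> bins;
    for (const Ray& r : rays) bins[octant(r)].push_back(r);

    // Each bin would then be traced as one batch; lanes in a batch tend to
    // visit the same BVH nodes, which is what keeps SIMD units busy.
    for (int i = 0; i < 8; ++i)
        if (!bins[i].empty())
            std::printf("bin %d: %zu rays\n", i, bins[i].size());
}
```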
I disagree. Triangle RT is probably the long term future of rendering but in the next decade maybe we will have an intermediate step.

Likely till the end of the RDNA2 console generation, so 2025'ish, with a possibility that PC h/w will do a lot more frame rendering with RT by the end of this period already. It is already happening with RTX games like WDL and CP2077.
Traversal is entirely compute everywhere, the difference is in what this compute is running on and what is exposed to APIs and developers.

No? NV's RT cores do both traversal and intersection. I assume shader cores are completely free for other work and do not assist traversal.