The problem is that RT is a massively parallel problem by nature, so a massively parallel compute structure (read: GPU) is already well suited to being a hardware solution to it. Is it not hardware RT because those compute units also do other things well? Do they have to accelerate only RT to be considered dedicated? What about ImgTec's solution that isn't full RT, does it count? What if GPU architecture evolves to the point where a generic compute unit does RT better? What's the magic threshold for calling something a hardware solution?
We've already had this discussion earlier in this thread. As I linked here, even Microsoft defines DXR/RT as a purely compute workload that doesn't need or require specialized HW blocks. So the logic would be that HW RT is RT done on the GPU, while SW RT is RT done on the CPU (for example Nvidia Iray running on the CPU via SSE2 instructions vs. running on CUDA GPUs, or Radeon ProRender or Cycles running on the GPU vs. running the same code on the CPU). Pretty straightforward... until people make up new conventions where, magically, HW RT only counts when you have RT Cores / specialized HW blocks for RT, so HW RT is now only "possible" on Turing GPUs with RT Cores, apparently. Anyway... water is wet... or not... nobody knows anymore.
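To make the "purely compute workload" point concrete, here's a minimal sketch of brute-force ray/triangle intersection written as an ordinary CUDA kernel. To be clear, the `Ray`/`Tri` structs and the `trace` kernel are my own illustrative names, not DXR's or any of those renderers' actual code; the point is just that nothing in it needs a dedicated RT block, it's plain parallel arithmetic that any compute-capable unit can run.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Hypothetical types for illustration only.
struct Ray { float3 o, d; };          // origin and direction
struct Tri { float3 v0, v1, v2; };    // triangle vertices

__device__ float3 sub(float3 a, float3 b) { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__device__ float3 crs(float3 a, float3 b) {
    return make_float3(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x);
}
__device__ float  dt(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore ray/triangle test: returns hit distance t, or -1 on a miss.
__device__ float intersect(const Ray& r, const Tri& tri) {
    const float EPS = 1e-7f;
    float3 e1 = sub(tri.v1, tri.v0);
    float3 e2 = sub(tri.v2, tri.v0);
    float3 p  = crs(r.d, e2);
    float det = dt(e1, p);
    if (fabsf(det) < EPS) return -1.0f;              // ray parallel to triangle plane
    float  inv = 1.0f / det;
    float3 s   = sub(r.o, tri.v0);
    float  u   = dt(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    float3 q   = crs(s, e1);
    float  v   = dt(r.d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float  t   = dt(e2, q) * inv;
    return t > EPS ? t : -1.0f;
}

// One thread per ray: brute-force nearest hit over a triangle list.
__global__ void trace(const Ray* rays, int nRays,
                      const Tri* tris, int nTris, float* hitT) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nRays) return;
    float best = 1e30f;
    for (int j = 0; j < nTris; ++j) {
        float t = intersect(rays[i], tris[j]);
        if (t > 0.0f && t < best) best = t;
    }
    hitT[i] = (best < 1e30f) ? best : -1.0f;
}
```

Compile the same loop for SSE/AVX on a CPU and you have the "SW RT" case in that framing; run it wide on a GPU and it's "HW RT" by the GPU-vs-CPU definition, with no RT Cores anywhere in sight.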
I agree entirely that it's the performance that matters (well, that and area and power, but the three are pretty closely related). And if a software solution using general-purpose compute ends up being more flexible and adaptable than, for example, hardware-accelerated geometry intersection only, then clearly going more general could be better.
It's just a semantic argument about what earns the "hardware" prefix. It has almost always been used to refer to dedicated or specialised hardware. For example, dedicated video decode/encode blocks are "hardware"; ROPs that can perform MSAA operations are hardware support for MSAA (you can do exactly the same thing in a shader, all completely hidden behind the API); likewise vector units on CPUs (you can perform vector operations just fine without them).
It's not magical; it's about whether the hardware has blocks, instructions or features designed specifically, or at the very, very least primarily, to support a particular thing (where "support" normally means accelerating it, or saving power while doing it).
In the context of graphics processing and GPUs, pixel shaders and compute are the bread and butter and the general case. If you want to claim "hardware support" for a subsection of what they can do, I really think you need to be looking for dedicated hardware, or modifications made specifically or primarily for that particular subsection of things that shader- and compute-capable processing units are already capable of doing.
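For contrast, here's roughly the kind of inner loop that dedicated RT hardware is built to take over: a software BVH traversal running as ordinary compute. The node layout, the field names and the assumption that sibling children sit next to each other in memory are all mine for illustration, not any vendor's actual format.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Hypothetical, flattened BVH layout for illustration only.
struct Ray {
    float3 o, d, invd;                  // origin, direction, 1/direction (precomputed)
    float  tmax;                        // maximum hit distance of interest
};
struct BVHNode {
    float3 lo, hi;                      // axis-aligned bounding box
    int    left;                        // index of first child, or -1 for a leaf
    int    firstTri, triCount;          // leaf payload: a range of triangle indices
};

// Slab test: does the ray enter the node's box within [0, tmax]?
__device__ bool hitAABB(const Ray& r, float3 lo, float3 hi) {
    float tNear = 0.0f, tFar = r.tmax;
    float t0 = (lo.x - r.o.x) * r.invd.x, t1 = (hi.x - r.o.x) * r.invd.x;
    tNear = fmaxf(tNear, fminf(t0, t1)); tFar = fminf(tFar, fmaxf(t0, t1));
    t0 = (lo.y - r.o.y) * r.invd.y; t1 = (hi.y - r.o.y) * r.invd.y;
    tNear = fmaxf(tNear, fminf(t0, t1)); tFar = fminf(tFar, fmaxf(t0, t1));
    t0 = (lo.z - r.o.z) * r.invd.z; t1 = (hi.z - r.o.z) * r.invd.z;
    tNear = fmaxf(tNear, fminf(t0, t1)); tFar = fminf(tFar, fmaxf(t0, t1));
    return tNear <= tFar;
}

// Iterative BVH traversal with an explicit stack. The box tests, node fetches
// and stack management here are exactly the subsection of work that dedicated
// RT units offload from the shader cores; as written, it runs as plain compute.
__device__ int countCandidates(const Ray& r, const BVHNode* nodes) {
    int stack[64];
    int sp = 0, candidates = 0;
    stack[sp++] = 0;                    // start at the root node
    while (sp > 0) {
        const BVHNode& n = nodes[stack[--sp]];
        if (!hitAABB(r, n.lo, n.hi)) continue;
        if (n.left < 0) {
            candidates += n.triCount;   // leaf: these triangles go on to the ray/triangle test
        } else {
            stack[sp++] = n.left;       // inner node: push both children
            stack[sp++] = n.left + 1;   // (siblings assumed adjacent in the node array)
        }
    }
    return candidates;
}
```

Shader cores can run this just fine; the question is only whether spending area on fixed-function units that do the box tests and traversal faster (and for less power) is worth it.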
But having dedicated hardware for a particular, definitive use case isn't always the best way to go, especially long term. If PS5 is only 'okay' at RT but 'brilliant' overall I don't think you can hold that against Cerny and Sony.
But yeah, it's semantics. I think we're all coming from the same place in terms of performance and flexibility.