In terms of design, GPUs have for years been honed into ever-more flexible core hardware that can assist any part of the graphics or compute pipeline. To paradigm-shift back a decade to bespoke, single-purpose hardware that will see questionable use, because these are processor cores only deployed on high-end hardware from one manufacturer, feels like a step backwards. I can fully see the opportunities augmented ray tracing will bring, but not for a while.
I agree with you on a more unified, general-purpose approach to rendering (i.e., less latency, a more efficient design, smaller possible die shrinks, and, of course, more developer-friendly), without the need for specialized logic [cores] outside the primary rendering array. But the question becomes: why did Nvidia go this route?
If we take Nvidia at their word that the Turing architecture was 10 years in the making (I know, possibly PR fluff), somewhere in R&D they must have had multiple designs (possibly even taped-out GPUs) for both approaches: one unifying rasterization and RT in the general shader cores, and the current Turing architecture. If the unified approach had problems, what roadblocks stopped Nvidia from going down that path? Was it core size and complexity? TDP of such a core design? Yields? Current process tech? Cost? A mixture of everything? Something had to push Nvidia toward a non-unified rendering design.
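To make the trade-off concrete, here is a rough sketch (my own illustration, not anything from Nvidia) of the kind of work being argued over: the ray-vs-box "slab" test that BVH traversal hammers millions of times per frame. A unified approach would run this math on the ordinary shader ALUs; Turing instead bakes the traversal and intersection testing into dedicated RT cores. The struct and function names below are purely illustrative.

```cpp
// A minimal sketch, assuming nothing about Nvidia's actual implementation:
// the ray-vs-bounding-box "slab" test at the heart of BVH traversal.
// In a unified design this math runs on the general shader ALUs; Turing's
// RT cores instead hard-wire the equivalent box/triangle tests and the
// traversal loop in fixed-function logic.
#include <algorithm>

struct Ray {
    float origin[3];
    float invDir[3];   // 1 / direction, precomputed once per ray
};

struct AABB {
    float min[3];
    float max[3];
};

// Returns true if the ray hits the box within the parametric range [tMin, tMax].
bool rayIntersectsAABB(const Ray& r, const AABB& box, float tMin, float tMax) {
    for (int axis = 0; axis < 3; ++axis) {
        // Clip the ray's parametric interval against the slab on this axis.
        float t0 = (box.min[axis] - r.origin[axis]) * r.invDir[axis];
        float t1 = (box.max[axis] - r.origin[axis]) * r.invDir[axis];
        if (t0 > t1) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;  // interval became empty: the ray misses this node
    }
    return true;
}
```

The per-ray work is trivially parallel but branchy and memory-bound, which is presumably part of the case for fixed-function hardware; the counter-argument is that the same transistors could have gone toward more general-purpose compute.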
Hopefully, AMD has figured out a more unified rendering approach.
Nvidia is asking you to pay now for bespoke hardware on the promise of what it might be capable of if/when more games support it, assuming you haven't ditched your 2070/2080/Ti for the better gen-2 version in 18 months' time anyway.
Let's be honest, this has always been Nvidia's approach: get gamers excited, and more specifically the premium consumers, to invest more and more in their brand, and see what sticks or fails. It's Nvidia's corporate culture of wanting to grow their (already colossal) market share beyond their competitors'.
Getting back on track, I would be extraordinarily surprised to see any iteration of this technology in the new consoles. If it's present, it will be so watered down as to be next to useless for the intended purpose of enhancing graphics significantly. Not unless AMD have something very cool cooking that they've not shown.
Disagree. If it's present but watered down, that would be a waste of die space and poor engineering judgement. If it's present and fully capable, then more than likely first-party developers (e.g., Naughty Dog, Guerrilla, SMS, 343 Industries, The Coalition, etc.) would jump at the chance to use RT.