Again, if a feature goes unused, how is it supposed to be helpful? Considering most of the big developers have had a lead time of about a year since receiving their dev kits, it should be a cause for concern for the Series S that, going by recent trends, they aren't using this feature from the outset of this new cycle.
Who said RT won't be used on Series S? It's baked into the hardware, and if it's there then someone's going to use it, be it an MS developer or a 3P developer. In fact, it's already in use in select games like DMC5.
Talking about Series devkits is really tricky, because we know from documented reports that early devkits were not reflective of final silicon even one year back. For a long time even Sony's devkits, like Oberon C0, had RT disabled because enabling it would crash the devkits (that's what I've heard of late, anyway; I always thought RT didn't show up because they were doing regression testing for PS4 and PS4 Pro BC... maybe both are true?).
Microsoft had to wait on AMD to finalize RDNA 2 features, which meant Series devkits were running behind, as was the stability of the GDK software suite. Devs like Capcom weren't even sure they would be able to ship RT in DMC5 until literally two weeks before its release. Before that they were seriously considering adding it in a patch, and we're only a little over four months removed from that.
If we look at the past cycle and the one before that, you'll notice that developers' resolutions tend to drop in new releases over time rather than holding steady throughout a cycle. Arguably, the best time to implement and showcase ray tracing on the Series S would have been right at the start of this generation, while games still have far simpler art/rendering pipelines. The fact that developers are finding they don't have the headroom to do it suggests the system has no further capacity for more general-purpose art/rendering pipelines once the inevitable resolution drops arrive. If these pipelines are nearly out of reach now for a less powerful system, what are the odds of an even weaker system running future games built on more powerful and demanding pipelines?
What games in particular are you referring to? The only developer of late we've heard explicitly touch on the difficulty of porting games to Series S is the Control dev, and that is specific to their title. Even that developer has said the S is very capable, particularly with next-gen titles, since those can leverage the new RDNA 2 features. If anything, we should see more RT in future Series S titles once 8th-gen cross-gen development has been more or less left behind.
It's not as simple as saying cross-gen games should "just work" on Series S and feature RT because their art/rendering pipelines are simpler, because that isn't 100% true. A lot of the bigger AAA games from the tail end of the 8th gen have pretty demanding rendering pipelines and complex art styles, whereas some of the titles we've seen so far targeting 9th-gen systems specifically may not be as big-budget (since they're targeting a smaller install base) and may not be as demanding. There's also the issue of certain game engines needing to be rebuilt in areas to accommodate the 9th-gen systems.
I never said these features won't be helpful, but it's naive to assume the opposite holds in the general case when we can already see it isn't true for one current system. As for SSR, I'm pretty sure the hardware designers never built hardware acceleration for that technique; it's something graphics programmers invented and implemented because there were no alternatives. Even then, SSR will largely be replaced by RT reflections in the long run because RT is fundamentally more consistent to begin with, so expecting developers to hold back progress on their own art/rendering pipelines by not removing these limitations, all for one platform, is purely egotistical...
Well, I guess we'll have to see. SSR maybe wasn't the best example, but there have been plenty of other techniques that were invented and run in software on older GPUs and have since gotten dedicated silicon in newer designs. ML is one such thing; it wasn't that long ago that ML models and programs targeted CPUs and simply leveraged whatever extended math co-processing features those had. It moved to GPUs a bit later, but even then, ML-specific hardware support for things like FP16, INT8 etc. wouldn't make its way into GPU designs for a good long while.
I don't think SSR will ever fully go away; I'm actually not sure PS5 and Series X have enough capability to run full RT while providing 4K (or near-4K) rendering @ 30 FPS, let alone 60. So if we're going to see a mix of RT and SSR even on those systems as the gen wears on, we'll surely see it on Series S and Switch Pro/Switch 2, and developers will be mindful of that at the onset.
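To make the "fundamentally more consistent" point concrete, here's a minimal, purely illustrative Python sketch (every name and value in it is invented) of SSR's core limitation: the ray march only has screen-space depth data to work with, so anything that isn't on screen simply can't show up in the reflection, whereas a ray-traced query tests against the full scene.

```python
def ssr_trace(depth_buffer, start_x, surface_depth, step=1, max_steps=64):
    """Toy 1D screen-space march: fails whenever the ray exits the frame."""
    x, ray_depth = start_x, surface_depth
    for _ in range(max_steps):
        x += step
        ray_depth += 0.1                       # ray travels deeper each step
        if x < 0 or x >= len(depth_buffer):
            return None                        # off-screen: no data, reflection cuts out
        if depth_buffer[x] <= ray_depth:
            return x                           # intersected on-screen geometry
    return None

depth = [100.0] * 128                          # distant background across a 128px frame
depth[60:70] = [6.0] * 10                      # a near object at pixels 60-69

print(ssr_trace(depth, start_x=50, surface_depth=5.0))  # 60: reflector found on screen
print(ssr_trace(depth, start_x=90, surface_depth=5.0))  # None: ray walked off-screen
```

An RT reflection has no such failure mode: the ray is tested against the scene's acceleration structure regardless of what the camera sees, which is exactly why it's more consistent (and more expensive).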
The first paragraph above already outlined why the opposite will happen. The chances of developers finding enough frame time to integrate these features will only get slimmer once resolution drops set in later in the cycle. If they do release a hypothetical portable device with these features this year, it will never see those benefits, seeing as the next-closest system (the Series S), which is already struggling, will be almost twice as powerful even at the portable's highest clocks. The probability of seeing those benefits could improve if they push back the release, but developers would still have to design their games around portable use, which will certainly mean lower clocks, so that's another roadblock for them to deal with...
I think we need some more concrete details on Switch Pro/Switch 2 before arriving at some of these conclusions. We should also remember that AMD's and Nvidia's architectures are in a lot of ways pretty different, despite having more similarities than, say, either has with Intel's Xe or Apple's stuff. We can't just look at the raw TF, go "Series S has over 2x the TF," and assume that's that. We can already look at AMD's and Nvidia's current cards and see that while AMD's either compete evenly with or beat Nvidia's (some heavily) in rasterized tasks, that's often before Nvidia's advantages like DLSS (via the Tensor cores) and hardware RT (via dedicated RT cores) enter the picture. Currently AMD has no equivalent on RDNA 2; their RT is tied to the CUs, and FidelityFX is not hardware-accelerated the way the Tensor cores are (although in Microsoft's case, their systems support DirectML, which can tap into hardware acceleration to some degree).
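To put numbers on the "raw TF" comparison being cautioned against: peak FP32 throughput is just shader units × 2 FLOPs per clock (fused multiply-add) × clock speed. The Series S figures below are documented; the Switch successor figures are pure placeholders made up for illustration, not leaks or specs.

```python
def fp32_tflops(shader_units, clock_ghz):
    # Peak FP32 = units x 2 FLOPs/clock (FMA) x clock; a ceiling, not real-world perf.
    return shader_units * 2 * clock_ghz / 1000.0

series_s      = fp32_tflops(1280, 1.565)  # documented Series S GPU config
hypo_docked   = fp32_tflops(1024, 1.0)    # placeholder numbers, purely hypothetical
hypo_portable = fp32_tflops(1024, 0.6)    # placeholder numbers, purely hypothetical

print(f"Series S:       {series_s:.2f} TF")        # ~4.01 TF
print(f"Hypo. docked:   {hypo_docked:.2f} TF")     # ~2.05 TF
print(f"Hypo. portable: {hypo_portable:.2f} TF")   # ~1.23 TF
print(f"Naive ratio:    {series_s / hypo_docked:.1f}x")
```

And that naive ratio is precisely what the DLSS/Tensor-core point undermines: the two architectures spend their FLOPs very differently.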
As for how resolution drops might impact things, I don't see it as too much of a concern. DRS can take care of it, and games leveraging whatever DLSS support Nintendo includes (probably DLSS 2.0) can render at a lowered internal resolution with lowered texture quality, which helps with frame times, then use DLSS to scale the image up to the desired target resolution (the output doesn't have to be 4K). That saves on the frame budget, and those savings can be spent on any varying degree of RT. It'll take some smart design choices, but for serious devs this shouldn't be an issue once they're acclimated.
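Here's the rough pixel-count arithmetic behind that (the frame-time figures are invented for illustration; real costs don't scale perfectly linearly with pixel count, so treat this as an upper bound on the savings):

```python
res_4k       = 3840 * 2160
res_internal = 1920 * 1080   # a common DLSS "performance"-style input for a 4K output

ratio = res_internal / res_4k
print(f"Pixels shaded: {ratio:.0%} of native 4K")   # 25%

# Invented budget: if shading natively at 4K cost ~14 ms of a 16.6 ms (60 FPS)
# frame, shading a quarter of the pixels reclaims headroom that has to cover
# the DLSS pass itself, with the remainder available for RT or other effects.
native_cost_ms  = 14.0
approx_saved_ms = native_cost_ms * (1 - ratio)
print(f"Up to ~{approx_saved_ms:.1f} ms reclaimed before DLSS's own cost")
```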
There exist plausible reasons why RT won't materialize on much weaker platforms for some time: an existing system already struggling, more demanding art/rendering pipelines being introduced over time, and designing for worst-case power limits. Combined, these are more than valid explanations for why other platforms won't be able to follow the high-end systems. DLSS isn't going to change this outcome either, since we already have an example of a low-end console reaching its limit early in the cycle while running 1080p with art/rendering pipelines that don't even feature ray tracing, so the possibility of DLSS somehow making more room to run more demanding pipelines is largely unsubstantiated, especially on a much less powerful system...
A lot of this, again, is predicated on judging Series S somewhat unfairly. The devkits were constantly running behind, the GDK was taking longer to stabilize, MS had to wait on AMD for certain features (some of which still aren't ready), devs needed time (some still need time) to acclimate to GDK over XDK, engines have to be retooled, porting teams may or may not have the required manpower and funds to prioritize optimizations in certain ports, etc. etc.
Features like RT aren't going to be used wholesale even on PS5 and Series X; there will be compromises there too. But while you're right about rendering pipelines getting more complex, these systems have the features to accommodate that. I'm not even talking about DLSS here, but things like Mesh Shaders, VRS (Tier 1 and Tier 2), SFS etc. Admittedly very Series-centric things on the console side, but Nvidia's hardware has equivalent features, which means Switch Pro/Switch 2 would also support them.
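As a quick illustration of why something like VRS buys real headroom (the coverage split below is invented; actual savings depend entirely on the scene and how aggressively a dev applies coarse rates):

```python
def vrs_shading_work(coverage_2x2, coverage_4x4=0.0):
    # Fraction of full-rate pixel-shader work left after coarse shading:
    # a 2x2 rate shades once per 4 pixels, a 4x4 rate once per 16.
    full_rate = 1.0 - coverage_2x2 - coverage_4x4
    return full_rate + coverage_2x2 / 4 + coverage_4x4 / 16

# Say 40% of the screen (sky, flat walls, motion-blurred regions) runs at 2x2:
print(f"{vrs_shading_work(0.40):.0%} of full-rate shading work")  # 70%
```

That reclaimed ~30% is the kind of budget that can be funneled back into RT on a smaller GPU.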
We have to look at these systems in the context of the whole of their capabilities, not just a single feature or two, or a single metric like raw compute. Everything has to work in concert to enable peak benefits. That's especially true for systems like Series S and Switch Pro/Switch 2, but once everything is orchestrated together, things like RT will be more commonplace.