I don't understand why we keep going through this; it's just one or two posters who pretend VRS is bad, invent VRS as the reason screenshots of unrelated games look bad, and so on. Even so, since I can't resist taking part...
On the 120fps mode for Doom, per DF:
Xbox Series X operates at a dynamic 1800p, while PlayStation 5 tops out at 1584p - and it is visibly blurrier.
That should be enough to feel pretty confident VRS is doing its job of raising performance headroom without looking terrible. Even if you can zoom into screenshots and find lower-res areas, if it's pushing a pixel count between 1584p and 1800p while the lower-res pixels are confined to areas the devs intentionally chose as needing less resolution, that's a big win.
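To put rough numbers on that, here's a back-of-the-envelope Python sketch of the shading work involved. The 30% coarse-shaded fraction is a made-up illustrative number (not anything DF measured), and the resolutions assume standard 16:9 frames:

```python
def pixels(height, aspect=16 / 9):
    # Total pixel count for a 16:9 frame of the given height.
    return int(height * aspect) * height

full_1800p = pixels(1800)  # XSX's dynamic ceiling
full_1584p = pixels(1584)  # PS5's reported peak

def vrs_cost(total_pixels, coarse_fraction, rate=4):
    # Effective pixel-shader invocations when `coarse_fraction` of the
    # screen is shaded at one invocation per `rate` pixels (2x2 -> 4).
    return total_pixels * ((1 - coarse_fraction) + coarse_fraction / rate)

# Hypothetical: 30% of an 1800p frame shaded at 2x2.
effective = vrs_cost(full_1800p, 0.30)
print(f"{effective / full_1584p:.2f}x the shading work of native 1584p")
```

Under those made-up assumptions, an 1800p frame with VRS costs roughly the same shading work as a native 1584p frame, while keeping full resolution in the areas the devs care about.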
Some other things we know about VRS: tier 1 (which only sets a single shading rate per draw call) really isn't that great and has hacky hardware support on older hardware, while tier 2 adds a screen-space shading-rate image, which is where the interesting wins are. And according to some devs (and common sense) it's less necessary on a deferred renderer, where you already have a lot more control over which calculations run on which screen pixels.
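For context on the tier split: tier 2's big addition over tier 1 is that screen-space shading-rate image, where each tile of the screen gets its own rate. Here's a toy Python model of how an engine might build one; the contrast heuristic, tile size, and names are all my own illustration, not the actual D3D12 API:

```python
def shading_rate_image(luma, tile=8, threshold=0.05):
    # Toy heuristic: per tile, drop to 2x2 shading where local
    # contrast is low (flat areas hide the resolution loss best).
    h, w = len(luma), len(luma[0])
    rates = []
    for ty in range(0, h, tile):
        row = []
        for tx in range(0, w, tile):
            vals = [luma[y][x]
                    for y in range(ty, min(ty + tile, h))
                    for x in range(tx, min(tx + tile, w))]
            contrast = max(vals) - min(vals)
            row.append("2x2" if contrast < threshold else "1x1")
        rates.append(row)
    return rates
```

A real implementation would run on the GPU and feed the result to the hardware, but the idea is the same: the devs (not the hardware) decide per-tile where resolution can be sacrificed.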
That said, clustered forward renderers are amazing for all the stuff this forum usually cares about -- super high resolutions (far fewer bandwidth concerns than deferred), high polygon counts, and super high framerates. Just look at Doom Eternal, which has some of the highest polygon counts and heaviest lighting of any AAA game and runs at 300+ fps on a normal PC setup. We already know some major devs prefer them (id, Infinity Ward, Avalanche), and I expect we'll see more as the generation continues.
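For anyone unfamiliar with what "clustered" means here: the screen (and usually depth) is split into cells, each cell gets a list of the lights that can affect it, and the forward pass only loops over that short list per pixel instead of every light in the scene. A toy 2D Python sketch, ignoring depth slices; the function and grid setup are illustrative, not any engine's actual code:

```python
def assign_lights(lights, grid_w, grid_h, screen_w, screen_h):
    # lights: list of (x, y, radius) in screen space.
    # Returns a grid where each cell holds the indices of lights
    # whose bounding box overlaps that cell.
    clusters = [[[] for _ in range(grid_w)] for _ in range(grid_h)]
    cell_w, cell_h = screen_w / grid_w, screen_h / grid_h
    for i, (lx, ly, r) in enumerate(lights):
        x0 = max(0, int((lx - r) / cell_w))
        x1 = min(grid_w - 1, int((lx + r) / cell_w))
        y0 = max(0, int((ly - r) / cell_h))
        y1 = min(grid_h - 1, int((ly + r) / cell_h))
        for cy in range(y0, y1 + 1):
            for cx in range(x0, x1 + 1):
                clusters[cy][cx].append(i)
    return clusters
```

Because each pixel only touches its cell's light list, you can throw hundreds of lights at the scene without the per-pixel cost exploding, which is a big part of why these renderers hit the framerates they do.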
Also, we've seen at least one major deferred title (Gears 5, a Microsoft first-party title) use VRS tier 2 to great effect. Maybe that was a totally unnecessary tech demo, maybe it's cheaper labor-wise, maybe it actually outperforms... but who cares: we at least have proof that it's a valid tool in the toolbox even for deferred renderers.
This is a good hardware feature. That was honestly obvious before any games came out using it, because it's a technique people have already been using for years that now has a hardware-specific implementation -- those are almost always popular. But even if it sees relatively little use, plenty of other niche features (like tessellation, certain posters' favorite hardware-accelerated feature here ever since an amazing-looking PS5 launch title came out) see enough use to be worth the silicon.
Edit: One more thing. I don't really think Gavin Stevens counts as any kind of expert. I actually have a little personal history with him (nothing negative), but even without that I think it's pretty clear he's not particularly experienced. He's a hobbyist gamedev who has been working on one game for something like 10 years. His opinion is as valid as anybody's who has gotten the code up and running (aka more valid than any of us), but I don't think people without shipped titles or AAA work should be presented as if they're super knowledgeable.