Upscaling Technology Has Become A Crutch

I don't feel that's the actual takeaway from that discussion. The complaint was about unrealistic expectations, more so with FSR2 (let's avoid going down that IHV debate), and not that DLSS/FSR2 should not be factored in at all.

Related to expectations, I also just don't feel foregoing upscaling is realistic here. If you look at the entire video quoted, they specifically discuss IQ issues with the 60 fps mode. However, I just don't see the hardware leap being enough to go from 1080p30 to 4K60 along with generational fidelity improvements (keep in mind we're well into the diminishing-returns stage of perceived improvements relative to the hardware) if we want to render everything at the so-called "native resolution." As such, without the current upscaling technologies, something else is going to have to do the upscaling. Or, I guess, do we just target native 1080p30 like last gen?
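To put rough numbers on that, here's a back-of-the-envelope sketch (raw pixel rate is obviously a crude proxy for real rendering cost, and the TFLOPS figures are just the public marketing numbers):

```cpp
#include <cstdio>

int main() {
    // Raw pixel throughput demanded by each target, in pixels per second.
    const double last_gen_target = 1920.0 * 1080.0 * 30.0;  // 1080p30, ~62 Mpix/s
    const double this_gen_target = 3840.0 * 2160.0 * 60.0;  // 4K60,   ~498 Mpix/s

    // Public peak-compute figures: PS4 ~1.84 TFLOPS, PS5 ~10.28 TFLOPS.
    const double compute_leap = 10.28 / 1.84;

    printf("Pixel-rate increase needed: %.1fx\n", this_gen_target / last_gen_target); // 8.0x
    printf("Raw compute increase:       %.1fx\n", compute_leap);                      // ~5.6x
    // 8x the pixels on ~5.6x the compute leaves nothing for generational
    // fidelity improvements, before per-pixel costs even go up.
    return 0;
}
```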

I think the rough target for 60 fps should be 1080p60 upscaled to 4K60. Go lower and your image quality will be poor, but if you go too high you risk not having enough rendering power left for fidelity, and the game ends up looking like a PS4 title. DF is right that there is some minimum base resolution you need to hit before upscaling. FSR2 isn't great, and other reconstruction methods may actually be better. Spider-Man 2 is in the 1080p-1440p range in performance mode, which is probably appropriate, but I think they have better reconstruction than FSR2. I haven't watched the coverage to know if they're still using a custom method, but to me it looks better than FSR.
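For reference, AMD's published FSR2 scaling ratios pin down exactly what each quality mode means at a 4K output; a small sketch:

```cpp
#include <cstdio>

// AMD's published per-axis scaling ratios for the FSR2 quality modes.
struct Mode { const char* name; double ratio; };

int main() {
    const Mode modes[] = {
        {"Quality",           1.5},
        {"Balanced",          1.7},
        {"Performance",       2.0},
        {"Ultra Performance", 3.0},
    };
    const int out_w = 3840, out_h = 2160;  // 4K output

    for (const Mode& m : modes)
        printf("%-17s -> %4.0f x %4.0f internal\n",
               m.name, out_w / m.ratio, out_h / m.ratio);
    // Quality           -> 2560 x 1440
    // Balanced          -> 2259 x 1271
    // Performance       -> 1920 x 1080
    // Ultra Performance -> 1280 x  720
    return 0;
}
```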

I don't think any devs are "skipping" optimization and just relying primarily on upscaling.
 
I don't feel that's the actual takeaway from that discussion. The complaint was about unrealistic expectations, more so with FSR2 (let's avoid going down that IHV debate), and not that DLSS/FSR2 should not be factored in at all.
I wasn’t suggesting that at all though. I think that if they want to use upscaling, the base resolution needs to be high enough. In my estimate, you should aim to deliver a minimum of 1080p 60fps on the consoles. If you use FSR/DLSS to upscale from 1080p, it’s tolerable. Trying to upscale from 540p, 768p and all the other nonsense derivatives being used is simply unacceptable.
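To put per-axis numbers on those base resolutions (assuming 16:9 throughout):

```cpp
#include <cstdio>

// Per-axis scale factor needed to reach 2160p from a 16:9 base resolution.
double factorTo4K(int base_height) { return 2160.0 / base_height; }

int main() {
    const int bases[] = {1080, 768, 540};
    for (int h : bases) {
        const double f = factorTo4K(h);
        printf("%4dp -> 4K needs %.2fx per axis (%4.1fx the pixels)\n", h, f, f * f);
    }
    // 1080p -> 2.00x per axis ( 4.0x the pixels)
    //  768p -> 2.81x per axis ( 7.9x the pixels)
    //  540p -> 4.00x per axis (16.0x the pixels)
    // Even FSR2's most aggressive published mode tops out at 3.0x per axis.
    return 0;
}
```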
Related to expectations, I also just don't feel foregoing upscaling is realistic here. If you look at the entire video quoted, they specifically discuss IQ issues with the 60 fps mode. However, I just don't see the hardware leap being enough to go from 1080p30 to 4K60 along with generational fidelity improvements (keep in mind we're well into the diminishing-returns stage of perceived improvements relative to the hardware) if we want to render everything at the so-called "native resolution." As such, without the current upscaling technologies, something else is going to have to do the upscaling. Or, I guess, do we just target native 1080p30 like last gen?
I don’t agree with this sentiment at all. No one is saying that they need to target native 4K. There’s more than enough power to deliver a next-gen experience at 1080p60; I’d argue there’s enough power even at 1440p. The issue is that you have to be extremely wise with your feature set, and developers in general are being rather unwise about it.

Look at the attempts to use hardware RT. The hardware is not capable of delivering a good hardware RT implementation without eating a significant chunk of the render budget. It’s a giant waste of computational resources on console, but you still have devs trying to fit a square peg into a round hole. Look at Spider-Man 2, which is praised for its RT implementation: it hardly looks like a generational upgrade from Miles Morales, all for some subpar RT effects. They could have used a combination of dynamic cube maps, SSR, etc. that would generally lead to a more performant effort.
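For anyone unfamiliar with the alternative being suggested, here's a minimal sketch of the classic SSR-with-cubemap-fallback chain. To be clear, this is not Insomniac's actual pipeline; every type and function below is an illustrative stub:

```cpp
#include <cstdio>
#include <optional>

struct Vec3 { float x, y, z; };

// Standard reflection of v about normal n: r = v - 2(v·n)n.
Vec3 reflect(Vec3 v, Vec3 n) {
    const float d = 2.0f * (v.x * n.x + v.y * n.y + v.z * n.z);
    return {v.x - d * n.x, v.y - d * n.y, v.z - d * n.z};
}

// Stub probe: a real engine would sample a pre-rendered dynamic cubemap.
struct Cubemap { Vec3 sample(Vec3) const { return {0.1f, 0.1f, 0.1f}; } };

struct Pixel { Vec3 normal, viewDir; bool onScreenHit; };

// Stub screen-space ray march: SSR can only return data that is already
// on screen, which is its fundamental limitation.
std::optional<Vec3> traceScreenSpace(const Pixel& p) {
    if (p.onScreenHit) return Vec3{0.8f, 0.8f, 0.8f};
    return std::nullopt;
}

// The fallback chain: try SSR first; where it misses (off-screen or
// occluded data), sample the nearest cubemap probe instead of tracing rays.
Vec3 shadeReflection(const Pixel& p, const Cubemap& nearestProbe) {
    if (auto ssr = traceScreenSpace(p))
        return *ssr;  // cheap and sharp when it hits
    return nearestProbe.sample(reflect(p.viewDir, p.normal));
}

int main() {
    Cubemap probe{};
    const Pixel hit{{0, 1, 0}, {0, 0, -1}, true};
    const Pixel miss{{0, 1, 0}, {0, 0, -1}, false};
    printf("SSR path:     %.1f\n", shadeReflection(hit, probe).x);   // 0.8
    printf("Cubemap path: %.1f\n", shadeReflection(miss, probe).x);  // 0.1
    return 0;
}
```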

In general, devs should stick to traditional raster methods on consoles until they have dedicated RT hardware capable of delivering performant experiences. Even look at AW2 on console: Remedy spent so many resources on effects trying to make the world look real. Unfortunately, it couldn’t be more fake, with the wacky physics, a static world with barely any interactivity, low-res shadows, subpar animation, etc. The image quality is so bad with FSR2; the aliasing is so poor. Then look at the ponytail physics: even Tomb Raider 2013 has better hair physics with TressFX, or whatever it’s called. For a game where you mostly walk slowly through tight corridors, I feel like they could have done the minimum to hit a higher base resolution. It’s like they took the unoptimized effects from Quantum Break and turned them up to 11.
 
Recently I have been thinking about the upscaling side of this conversation, and how it relates to the increase in resolution and pixel density.
I think I have come up with a half-decent framework for this type of discussion... TV.
Given I come from a TV/film production background (although not anymore), I guess it's not that surprising.
So bear with me here...

TV used to be 4:3 SD: essentially 720x576 @ 50i (PAL) or 720x480 @ 59.94i (NTSC).
These resolutions and pixel densities established the norms for what would be displayed on a screen, especially a TV screen.
Sure, movies/film are perceptually a lot higher res, but that's in a cinema; at home it was still SD.
Then we moved to HD, with formats like 720p and 1080i, and then on to 1080p.
Then eventually we moved to UHD/4K.

The point being that the amount of image we usually see on the screen hasn't changed much, although there is a good argument to be made that the transition from 4:3 to 16:9 did actually increase the amount that a viewer would see.
What we have got, though, is a huge increase in the detail of the objects we see on the screen.

So we can use this framework to discuss and compare upscaling methods, and possibly use the TV's built-in upscaling as a starting point.
Most people would agree that scaling SD to UHD is gonna look like shit.
What about 720p to UHD?
What about 1080p to UHD?
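
Putting pixel counts on those questions makes the gap concrete:

```cpp
#include <cstdio>

int main() {
    struct Fmt { const char* name; int w, h; };
    const Fmt formats[] = {
        {"SD (PAL)", 720, 576},
        {"720p",    1280, 720},
        {"1080p",   1920, 1080},
    };
    const double uhd_pixels = 3840.0 * 2160.0;

    // How many output pixels must be invented per source pixel?
    for (const Fmt& f : formats)
        printf("%-8s -> UHD: %4.1fx the pixels\n", f.name, uhd_pixels / (f.w * f.h));
    // SD (PAL) -> UHD: 20.0x the pixels
    // 720p     -> UHD:  9.0x the pixels
    // 1080p    -> UHD:  4.0x the pixels
    return 0;
}
```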

I like to think I've got a pretty good eye for this sort of stuff; years of watching 100% raw HD and UHD footage on reference displays has somewhat ruined me,
together with watching DF videos. But I'm probably more sensitive to bad digital TV encodes than to poor upscaling tech à la FSR or DLSS.
Most of the time, though, I can't see the difference between a good upscale of a 1080p signal and a native UHD signal - when in motion, that is.

It would be interesting to see what the very best offline scalers can do with SD/HD footage when scaling to UHD.
I suspect that nothing is going to be able to take an SD signal and produce a super-crisp UHD output from anything but the most basic, bland image.

And thus, my actual point: expecting upscalers like FSR/DLSS and the rest to work from resolutions approximating SD is just unrealistic.
The amount of "stuff" shown on screen isn't increasing, but the detail IS. So we are probably OK to use upscalers, so long as the starting image has enough detail.

Currently, IMHO, that amount of detail is somewhere between 1080p and 1440p - which, not coincidentally, matches the internal resolutions of FSR2's Performance and Quality modes at a 4K output.

Again, I'd love to see a comparison between FSR/DLSS etc. and top-end offline scalers, which are VERY good.
 