Yesterday I had a lot of fun testing resolutions.
Pathtracing in Cyberpunk 2077 is mandatory for me after the latest improvements. Thanks to the newly added Ray Reconstruction, the image quality with pathtracing has increased significantly, and the speed has gone up by about 8% too.
Yet with UHD and DLSS Performance, which renders at 1080p and would be the optimum on a FullHD or UHD screen, the performance is not enough for me in all the important places in the game. So I tried something else.
First, the numbers (a small calculation sketch follows after them):
DLSS Quality - 66.7% (2/3) per axis, ~44% resolution
DLSS Balanced - 58% per axis, ~34% resolution
DLSS Performance - 50% per axis, 25% resolution
DLSS Ultra Performance - 33% (1/3) per axis, 11% resolution
2880*1620 = 4665600
DLSS Quality = 1920*1080 = 2073600
DLSS Balanced = 1670.4*939.6 = 1569507.84
DLSS Performance = 1440*810 = 1166400
2560*1440 = 3686400
DLSS Quality = 1706.67*960 ≈ 1638400
DLSS Balanced = 1484.8*835.2 = 1240104.96
DLSS Performance = 1280*720 = 921600
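To make the math above easy to reproduce, here is a minimal Python sketch that applies the commonly reported per-axis DLSS scale factors to an output resolution; the exact factors can vary by game and DLSS version, so treat the values as an illustration rather than a spec.

```python
# Per-axis DLSS scale factors (commonly reported values; exact numbers may
# vary by game and DLSS version).
DLSS_SCALE = {
    "Quality": 2 / 3,           # ~66.7% per axis, ~44% of the pixels
    "Balanced": 0.58,           # 58% per axis, ~34% of the pixels
    "Performance": 0.5,         # 50% per axis, 25% of the pixels
    "Ultra Performance": 1 / 3, # ~33.3% per axis, ~11% of the pixels
}

def render_resolution(width, height, mode):
    """Internal render resolution and pixel count, rounded to whole pixels."""
    s = DLSS_SCALE[mode]
    w, h = round(width * s), round(height * s)
    return w, h, w * h

for out_w, out_h in [(2880, 1620), (2560, 1440)]:
    print(f"Output {out_w}x{out_h} = {out_w * out_h:,} px")
    for mode in DLSS_SCALE:
        w, h, px = render_resolution(out_w, out_h, mode)
        print(f"  {mode:<17} -> {w}x{h} = {px:,} px")
```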
In two scenarios the speed was ideal for me: 1440p with DLSS Quality and 1620p with DLSS Balanced. The first renders about 1638400 pixels internally and the second about 1569508. During testing I found the image quality of both surprisingly good.
At 1440p and 1620p I can use DLDSR, which does the downsampling with the help of AI and has replaced the old DSR. With DSR I was only happy with UHD on my VTW60 plasma: resolutions between FullHD and UHD lost a lot of sharpness on a FullHD TV, which is no longer the case with DLDSR. Usually even in-game resolution scaling was better than the old 1620p or 1440p DSR with its Gaussian filter.
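To put that DLDSR + DLSS chain into numbers, here is a small sketch assuming a 1920x1080 panel and the 1.78x and 2.25x DLDSR factors exposed in the NVIDIA control panel (which on FullHD correspond to 2560x1440 and 2880x1620); it prints the internal render resolution relative to the native screen.

```python
# Sketch of the DLDSR + DLSS chain, assuming a FullHD panel. The 1.78x and
# 2.25x DLDSR factors map to 2560x1440 and 2880x1620 output resolutions.
SCREEN = (1920, 1080)  # physical FullHD panel
DLDSR_OUTPUTS = {
    "DLDSR 1.78x": (2560, 1440),
    "DLDSR 2.25x": (2880, 1620),
}
DLSS_AXIS_SCALE = {"Quality": 2 / 3, "Balanced": 0.58}

native_px = SCREEN[0] * SCREEN[1]
for dldsr_name, (w, h) in DLDSR_OUTPUTS.items():
    for dlss_name, s in DLSS_AXIS_SCALE.items():
        iw, ih = round(w * s), round(h * s)   # internal render resolution
        share = iw * ih / native_px           # fraction of native 1080p pixels
        print(f"{dldsr_name} + DLSS {dlss_name}: {iw}x{ih} ({share:.0%} of native 1080p)")
```

The two combinations used here come out at roughly 79% (1440p DLSS Quality) and 76% (1620p DLSS Balanced) of the native 1080p pixel count.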
Here is a comparison between 1440p with DLSS quality and 1080p.
The roughly 1638400 pixels with AI look much better than the "real" 2073600 without DLSS, and it also runs significantly faster. On the left, FullHD flickers along the long straight lines of the houses and the scaffolding at the house in the upper left, while the DLSS image is very stable and supersampled. Without DLSS there is also no Ray Reconstruction, which makes pathtracing in the 1080p picture less accurate.
At first it sounds terrible: I'm playing at a sub-FullHD resolution again, slightly above 900p. Where will we end up if we go that low again...
But when I look at the results on the screen, I'm fascinated. We're in a post-resolution age. If you put 100 people in front of 1440p DLSS Quality vs. 1080p native, the majority would clearly find 1440p DLSS Quality superior while playing; it is visually closer to a reference like 8K. I prefer to lower the "native" resolution in order to get "true", physically correct lighting with little error.
In the context of AI, resolution is the best factor to optimize. Ideally you could adjust the screen resolution in one-percent steps and have the quality scale smoothly.
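As a toy illustration of that ideal, a hypothetical per-percent scale slider (not an existing game setting) could be as simple as this:

```python
# Hypothetical continuous resolution scale instead of a handful of fixed
# presets; purely illustrative.
def internal_resolution(output_w, output_h, scale_percent):
    """Internal render resolution for a per-axis scale given in percent."""
    return round(output_w * scale_percent / 100), round(output_h * scale_percent / 100)

for pct in range(50, 101, 10):
    print(f"{pct}%: {internal_resolution(2560, 1440, pct)}")
```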
Once pathtracing is standard and all the fake raster effects are gone, games will only have screen resolution, ray count, ray length, LOD, density, and resolution settings for textures, smoke, particle effects, clouds, etc. Maybe by then there will be no more LOD and something like Nanite will be standard.