I was scouring the web trying to find RT benchmarks for different games, and while looking through various written reviews, I noticed that upscaling was used in the graphs for several games. On many sites, reviewers didn't even bother to chart the native performance of the GPUs in their RT benchmarks. An example of this is GamersNexus' review of the 5090: Dying Light, Resident Evil, and Black Myth: Wukong all have upscaling factors baked into their graphs. This is extremely problematic for a variety of reasons.
The first reason is that there is potential variability in upscaling factors. As we've seen with Intel's XeSS, GPU vendors are able to change the upscaling factor behind a given preset, leading to mismatched comparisons between GPUs. If a reviewer were to make the mistake of equating DLSS Quality with XeSS Quality, the comparison would be unequal with respect to upscaling factors.
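To make the scale of that mismatch concrete, here is a quick back-of-the-envelope sketch. The per-axis scale factors below are illustrative assumptions (one "Quality" preset at 1.5x, another at 1.7x, roughly in line with how DLSS Quality and the reworked XeSS 1.3 Quality have been described); the exact values depend on the upscaler version, but the point is that "Quality" does not mean the same internal resolution everywhere.

```python
# Rough sketch: how different "Quality" scale factors change the internal
# render resolution and pixel count at a 4K output. The scale factors are
# assumptions for illustration, not authoritative values.

OUTPUT_W, OUTPUT_H = 3840, 2160

# Per-axis scale factors (output resolution / internal resolution).
presets = {
    "Upscaler A 'Quality' (1.5x)": 1.5,
    "Upscaler B 'Quality' (1.7x)": 1.7,
}

baseline_pixels = None
for name, factor in presets.items():
    w, h = round(OUTPUT_W / factor), round(OUTPUT_H := OUTPUT_H / factor) if False else (round(OUTPUT_W / factor), round(OUTPUT_H / factor))
    pixels = w * h
    if baseline_pixels is None:
        baseline_pixels = pixels
    rel = pixels / baseline_pixels
    print(f"{name}: {w}x{h} internal, {pixels/1e6:.2f} MP ({rel:.0%} of first preset)")
```

On these assumed factors, preset B renders roughly a fifth fewer pixels per frame than preset A, so charting them side by side under the same "Quality" label hands one card a built-in workload advantage.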
Secondly, there is potential variability in the performance cost of upscaling between the GPUs being compared. In GamersNexus' review, FSR is used across all GPUs. However, FSR has a performance cost, and that cost is not necessarily equal across GPU vendors, which skews the comparison. Furthermore, other vendors may have upscaling algorithms with a lower performance cost at an equivalent or superior quality level. In that case, the data isn't particularly helpful.
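One way a reviewer could expose that cost is to measure each card twice: once rendering natively at the upscaler's internal resolution with the upscaler off, and once with the upscaling pass producing the full output. The frametime difference is roughly the per-frame cost of the pass on that GPU. A minimal sketch of the bookkeeping, with made-up numbers:

```python
# Sketch: isolating the per-frame cost of an upscaling pass on different GPUs.
# frametime_internal: ms/frame rendering natively at the upscaler's internal
#                     resolution (upscaler off).
# frametime_upscaled: ms/frame at that same internal resolution with the
#                     upscaling pass producing the full output.
# All numbers are hypothetical, purely to show the arithmetic.

samples = {
    "GPU A": {"frametime_internal": 8.0, "frametime_upscaled": 8.9},
    "GPU B": {"frametime_internal": 8.0, "frametime_upscaled": 9.6},
}

for gpu, t in samples.items():
    overhead_ms = t["frametime_upscaled"] - t["frametime_internal"]
    fps_internal = 1000 / t["frametime_internal"]
    fps_upscaled = 1000 / t["frametime_upscaled"]
    print(f"{gpu}: upscaling pass ~{overhead_ms:.1f} ms/frame "
          f"({fps_internal:.0f} fps -> {fps_upscaled:.0f} fps)")
```

In this toy example both cards do identical rendering work, yet the one paying a heavier upscaling tax ends up looking slower in the chart, which is exactly the distortion a single shared upscaler can introduce.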
Furthermore, we've now seen a change with DLSS 4 that allows the user to replace the CNN model with the new transformer (TNN) model in games. With this development, it's easy to envision a future where this is done automatically rather than on a game-by-game basis. As we've seen, the transformer model has a higher performance cost. If the model becomes something that is automatically replaced, it renders the benchmark data useless.
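To see why a silent model swap poisons the data, imagine a review measured a card with the older model and a later driver substitutes a heavier one. A toy example, with entirely made-up per-frame costs:

```python
# Toy example: how a silently swapped upscaling model shifts published numbers.
# The per-frame model costs are hypothetical; the point is that the reviewed
# configuration and the shipping configuration are no longer the same thing.

base_frametime_ms = 10.0   # game render time without the upscaling model
old_model_cost_ms = 0.5    # cost of the model in place when the review ran
new_model_cost_ms = 1.0    # cost of the model a later driver swaps in

fps_reviewed = 1000 / (base_frametime_ms + old_model_cost_ms)
fps_after_swap = 1000 / (base_frametime_ms + new_model_cost_ms)

print(f"Published in review: {fps_reviewed:.1f} fps")
print(f"After automatic model swap: {fps_after_swap:.1f} fps "
      f"({fps_after_swap / fps_reviewed - 1:+.1%})")
```

Even a fraction of a millisecond per frame moves the published number by several percent, and the reader has no way to know which model the chart was generated with.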
Finally, when comparing historical performance between GPUs, things become unnecessarily difficult if the raw RT data is not available. By raw RT data, I mean traditional benchmarks with no upscaling, ray reconstruction, or other software features. While it's helpful to provide upscaling benchmarks to viewers, it should not come at the expense of the raw data. Raw benchmarks should always be prioritized, and upscaling benchmarks should be treated as an additive extra.