Digital Foundry Article Technical Discussion [2021]

Certainly the comparison here paints quite a stark contrast between DLSS and the TAA upscaler. That said, it sounds like the base resolution on the XSX in this scene is somewhere between 1080p and 1296p, whereas on the PC it's 1440p, so it's not a fair comparison. Still, I doubt DLSS Performance mode (1080p base resolution) would look anywhere near as bad as the XSX image here.

Check out the DF article - there I show DLSS Performance mode against TAAU as well. I also tested TAAU on PC vs. DLSS back in the original PC video I did a month ago.
I'm guessing the opposite, but DF decided to duck the comparison.
The 2060 Super would definitely give you superior results, based on what I see here.
 
An RTX 2060 has tensor cores to help with RT, 8GB of memory, consumes 160W on its own, and still costs 500€; I don't see how it could fare badly against the new consoles, even if it was released 2 years ago.
A Ferrari from the 2000s is still a Ferrari; even if newer ones are better, it can still push against today's standard cars.
 

I don't think price was ever a consideration here. Anyway, the RTX 2060 was released 3 years ago (this September), and it was the lowest entry in the Turing RTX line at the time. What it costs today due to the silicon/ETH craze doesn't really matter.
The 2060 could be found for a little over 300 dollars back in 2018.
 
Well, according to Hardwareluxx the RX 6700 can hit 59fps at 1080p (ultra settings) but can drop to ~38fps. It is much more stable on the Series X, and the resolution goes higher. So I guess performance is not bad after all, and it seems that details were reduced to hit that 60fps target. The RTX 2060 Super goes from 49 down to 38, so it is also not performing that great.
 
Don't mix up the 2060 Super and the 2060; they're pretty different cards in terms of performance. I doubt the 2060 Super in a regular PC would perform as well at the same native resolutions on high settings, because there were clearly very careful optimizations and compromises made for the console version. But there are visual compromises on the Xbox and no DLSS, so I'm sure in practice the 2060 Super is a better bet.

Would love to see an in-depth performance comparison (also including the PS5!) though, if anybody is listening!
 
Impossible to do accurate PC vs. console performance comparisons here, as Metro is a game where you cannot individually tweak each visual setting, only pick a preset.
 

I understand the challenge of explaining it to viewers; I just personally want to see what ~medium settings on a midrange GPU and the Xbox settings look like, visually and in terms of performance, side by side.

Of course, I have an XSX and a 2060 Super myself, so if it ever really bothers me I can just buy the game.
 
I just had a quick read of the article, and the XSS seems to be doing pretty well compared to the impression I got from some of the posts in here.
At its lowest resolution (rather than its typical one), sure, it's going to be very blurry in comparison to the XSX and PS5, but for the most part it's pretty good.

A 30fps mode would've been nice, to see how it compared given the impact on the TAA. Is that something that can be tested on PC?

I'm surprised that they've only implemented VRS tier 1 though.
 
For its power output it's doing very well; I'm still fairly impressed. For the marketed 1080p it's not doing so well.
 
It is still the "old" engine in the background; no really new features were used. And as far as I understood, VRS tier 2 needs to be implemented more or less from the beginning. This is "just" an "old" game where all the light sources were replaced with "new" ones, but it is still the old engine.
 
I got the opposite impression: it sometimes behaves slower than the on-paper XSX vs. XSS difference would suggest, but the amount of RAM probably increases the gap.
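(To put rough numbers on that paper difference - these are the public spec figures, and the ratios are approximate; real games won't scale this cleanly:)

```python
# Rough "paper" gap between Series X and Series S, using public spec figures.
# Ratios only -- real-world scaling also depends on CPU load, RAM and resolution targets.
xsx = {"tflops": 12.15, "gpu_bandwidth_gb_s": 560}
xss = {"tflops": 4.0,   "gpu_bandwidth_gb_s": 224}

compute_ratio = xsx["tflops"] / xss["tflops"]                            # ~3.0x compute
bandwidth_ratio = xsx["gpu_bandwidth_gb_s"] / xss["gpu_bandwidth_gb_s"]  # ~2.5x bandwidth

# In a purely GPU-bound case, ~3x compute is roughly 1.7x per axis, so a 1080p
# Series S target maps to something in the 1800p-1900p range on Series X.
print(f"compute {compute_ratio:.2f}x, bandwidth {bandwidth_ratio:.2f}x")
```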
 
for it's power output it's doing very good. I'm still fairly impressed. For the marketed 1080p it's not doing so good.

It's a shame there are no tools to measure resolution over time and frequency, as that'd be the best indicator for relative performance.

I wonder whether it'd be worth checking something like 2-3 second intervals where frames are dropped in a cutscene. Even then, I doubt all of those ~180 frames could be accurately counted, never mind how laborious that'd be.

Would be interesting to know the average resolution at the very least, as non-console gamers will state that the resolution is 1080p whereas console gamers will state 1700p (or whatever the max resolution is).

The truth is somewhere nebulously in-between.
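(If per-frame estimates did exist - from pixel counting or some automated tool - the averaging side is the easy bit. A hypothetical sketch, with made-up numbers, of what "average resolution" and per-interval lows would look like:)

```python
import statistics

def summarize_resolutions(per_frame_heights, fps=60, interval_s=3.0):
    """Average per-frame vertical resolution estimates, overall and per ~3s interval.

    per_frame_heights is hypothetical input, e.g. one pixel-counted height per frame
    (~180 frames per interval at 60fps).
    """
    frames_per_interval = int(fps * interval_s)
    overall = statistics.mean(per_frame_heights)
    per_interval = [
        statistics.mean(per_frame_heights[i:i + frames_per_interval])
        for i in range(0, len(per_frame_heights), frames_per_interval)
    ]
    return overall, per_interval

# Made-up example: a clip that mostly sits near 1700p but dips toward 1080p under load.
heights = [1700] * 300 + [1260] * 60 + [1080] * 60 + [1700] * 180
avg, intervals = summarize_resolutions(heights)
print(f"average ~{avg:.0f}p, lowest interval ~{min(intervals):.0f}p")
```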
 
I have a tool that can do this, but only for basic dynamic upscale resolutions. Temporal reconstruction throws off my algorithm; it's much harder to find the upscaling artifacts. There are alternative ways to do these types of comparisons, but they won't be a by-the-book pure 'pixel counting' method. You can have other metrics that are representative of the detail between two screenshots, but you won't get the resolution if the game uses a temporal upscaler. I am still researching a way to do temporal pixel counting, but I'm just not there. It's pretty hard lol. I need to spend more time with CUDA as well; it's too slow to decompress on the CPU, perform the analysis, and then compress the result back out as a video just to see what's going on.
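(For anyone wondering what the "basic" case even looks like, here's a toy illustration of counting a nearest-neighbour upscale - not the actual tool's algorithm, just the idea that temporal reconstruction breaks:)

```python
import numpy as np

def estimate_scale_nearest(row):
    """Toy version of the 'basic' case: nearest-neighbour dynamic-res upscaling.

    Each native pixel becomes a block of identical output pixels, so run lengths
    between value changes along a scanline cluster around the horizontal scale
    factor. Bilinear filtering softens this, and temporal reconstruction breaks
    the assumption entirely -- which is why it defeats simple counters.
    """
    row = np.asarray(row)
    changes = np.flatnonzero(np.diff(row) != 0)
    if len(changes) < 2:
        return None
    return float(np.median(np.diff(changes)))

# Made-up example: 1280-wide content stretched to 1920 output (scale factor 1.5).
native = np.random.randint(0, 256, size=1280)
output = native[np.arange(1920) * 1280 // 1920]   # nearest-neighbour stretch
print(estimate_scale_nearest(output))             # ~1.5, i.e. native width ~ 1920 / 1.5
```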
 
I have a tool that can do this, but only for basic dynamic upscale resolution. Temporal will throw off my algorithm, it's much harder to find the upscaling artifacts. There are alternative ways to do these types of comparisons, but it won't be a by the book pure 'pixel' counting method. You can have other metrics that are representative of detail between 2 screenshots, but you won't get the resolution if they do temporal. I am still researching a way to do temporal pixel counting, but I'm just not there. It's pretty hard lol. I need to spend more time with CUDA as well, it's too slow to decompress on CPU and perform analysis and write back to CPU to compress out as a video still to see what's going on.

Well, that sounds incredible. If you're able to account for temporal upscaling with decent accuracy, then you're building a priceless tool. Sell it to the DF guys when you're done. ;)

Make the whole process fully objective.
 

Well, I'll leave it open source; I just haven't opened the code up to the community because I'm ashamed of it looking like a hack job.
And after much back and forth with their feedback, I haven't been able to get the tool to a state in which DF can use it publicly. I know at one point it was being trialed, but it's not sufficient for their needs. Not being able to do everything on the GPU is really making the whole algorithm useless. So the goals here are to copy the source videos to GPU memory, decode them there, process the frames, and write the results out when it's done, or encode a new movie to see what the AI is processing. But that's easier said than done. I think it's doable; at least in theory it should be. I'll have to keep at it and make it run faster for them to use it more. I don't know if they use it to find graphical anomalies between versions right now; it can be 'ok' useful for smaller clips, but it's not fast enough to run through several hours of video.
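(Roughly, the CPU-bound loop being described has this shape - a hypothetical OpenCV sketch, not the actual tool's code; "clip.mp4" and the metric are placeholders, and the stated goal is to keep decode and analysis on the GPU instead:)

```python
import cv2  # CPU decode path -- the goal above is to keep decode + analysis on the GPU

def analyze_video(path, per_frame_metric):
    """Shape of the current CPU-bound loop: decode, convert, analyze, repeat.

    per_frame_metric is any hypothetical detail metric; the expensive part is
    decoding and shuttling every frame through system memory, which a
    GPU-resident pipeline would avoid.
    """
    cap = cv2.VideoCapture(path)
    results = []
    while True:
        ok, frame = cap.read()                      # BGR uint8 frame, decoded on the CPU
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        results.append(per_frame_metric(gray))
    cap.release()
    return results

# Example metric: variance of the Laplacian as a crude per-frame sharpness score.
scores = analyze_video("clip.mp4", lambda g: cv2.Laplacian(g, cv2.CV_64F).var())
```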

FWIW: I never want to process another Mortal Shell video. But I had a good time running my 3950X, all 12 cores, for 24 hours straight. It needed 48GB to keep it from crashing, but it was a fun project to take on. I may return to it at another time, but right now I'm working on another indie title in Unity.
 