Digital Foundry Article Technical Discussion [2021]

Status
Not open for further replies.
Well, I'll leave it open source; I just haven't opened the code up to the community because I'm ashamed of how much it looks like a hack job.
And after much back and forth with their feedback, I haven't been able to get the tool into a condition in which DF can use it publicly. I know at one point it was being trialled, but it isn't sufficient for their needs. I'll have to keep at it and make it run faster if they're going to use it more. I don't know whether they currently use it to find graphical anomalies between versions.

fwiw: I never want to process another Mortal Shell video. But I had a good time running all 16 cores of my 3950X for 24 hours straight. It needed 48GB of RAM to keep from crashing, but it was a fun project to take on. I may return to it at another time, but I'm working on another indie title right now, built with Unity.

Oh, that's super interesting. Really excellent that you're making it open source.

I'd personally be very interested to see the data outputs from your software. I guess you count X million pixels over particular intervals to see the percentage difference between different hardware?
 
I don't do that right now; it's always been a side option I've considered. But counting pixels would then require machine-learning training, and I didn't want to go that route just yet. I'm using traditional image-quality analysis and doing my best to stay away from supervised training. The data sets are too large, and building that entire pipeline and training something would be costly.

Nothing too controversial here =P
Here is XSX vs PS5 in Warzone. This is an image-quality metric, i.e. a measure of visible detail on the screen, run over an exact replay replica. Higher on the chart means more detail. This covers the entire clip.
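The posts never say which metric the tool actually uses, so purely as a hedged illustration: a per-frame "visible detail" proxy like Laplacian variance (the `detail_score` name is invented for this sketch) behaves the way the chart is described, scoring higher on frames with more high-frequency detail.

```python
import numpy as np

def detail_score(gray: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response: higher = more visible detail."""
    # Discrete Laplacian built from shifted copies (no external dependencies).
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

# A flat frame carries no high-frequency detail; a textured one does.
flat = np.full((64, 64), 128.0)
textured = flat + np.random.default_rng(0).normal(0.0, 10.0, (64, 64))
scores = [detail_score(f) for f in (flat, textured)]
```

A real pipeline would compute something like this per decoded frame and plot the series across the clip.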

In actuality the XSX is running a higher dynamic resolution, as per the patch notes. This tool should also be able to pick up on other differences, like texture quality and AF, but I think that requires a lot more tuning and setup to catch. It certainly requires more sensitivity, so I would likely need to break the image into blocks and process block differences, as opposed to measuring the whole screenshot, where that difference in detail gets lost in the averaging.
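The block idea above can be sketched like this (assumed numpy-style code, not the tool's actual implementation): a per-block statistic preserves local differences that a whole-frame average washes out.

```python
import numpy as np

def block_scores(gray: np.ndarray, block: int = 32) -> np.ndarray:
    """Per-block variance map; a whole-frame average hides local differences."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block            # crop to a block multiple
    tiles = (gray[:h, :w]
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2))                      # (rows, cols, block, block)
    return tiles.var(axis=(2, 3))

rng = np.random.default_rng(1)
frame = np.zeros((128, 128))
frame[:64, :64] = rng.normal(0.0, 10.0, (64, 64))  # detail in one quadrant only
per_block = block_scores(frame)                    # 4x4 map of local variance
whole_frame = frame.var()                          # the average dilutes it
```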

Since we're talking about it, I could use help rebuilding this program in pure CUDA, with encoding/decoding and processing all staying on the GPU. That would pretty much make this tool usable. So if anyone reading this wants to help, send me a PM.

As you can see below, advertised resolution differences may not always manifest as a large difference, but here it is quantifiable. I lost my Mortal Shell comparisons, but there is nearly zero difference between the 1440p and 1800p patches.

[Attached image: psMveJd.jpeg (XSX vs PS5 Warzone image-quality chart)]
 
What dynamic resolution brings is that we're usually just looking around when we're not fighting, and dynamic res can reach the upper end of its range in that scenario. That's a really nice trick.
 
That's incredibly interesting.

Not only is the XSX measurably performing better on image quality, but the PS5 has large drops in the graph where the XSX has no equivalent. I wonder if that's a manifestation of the PS5's variable frequency? Depends on the time measured, I guess.
 
Both are running in BC here, so it's fairly unoptimized, I think. This metric was only meant to serve as verification of known facts, to see whether it could pick up on these differences. It failed terribly with Demon's Souls, for instance =P

But displaying graphs like this really broke the narrative around upper and lower bounds from pixel counting. I think I saw a lot of forums using those numbers from VG Tech or from DF as a method of deciding who won (console warring), sort of ignoring what lower and upper bounds really mean.

When you look at the numbers like this, things can criss-cross. IIRC there was one game in which the PS5 had lower numbers on its counted pixel bounds, but my tool picked up that it had consistently higher image quality than the XSX version across two clips that are fairly close.

I think, having been an outsider for a while and now having had a peek at their process while working with them: it's very difficult to have the bias people accuse them of having. You actually have _raw_ data, and sometimes you're just forced to report the data as it is. They try their best not to over- or under-embellish the data points either, and their graphs are likely smoothed as well, so people don't see the see-saw I have above; it would be too hard to read.
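As a rough illustration of the smoothing being described (the window size and function name are invented for this sketch), a centred moving average is the usual way a jittery per-frame series becomes a readable chart line:

```python
def smooth(series, window=5):
    """Centred moving average with edge padding; same length as the input."""
    half = window // 2
    padded = [series[0]] * half + list(series) + [series[-1]] * half
    return [sum(padded[i:i + window]) / window for i in range(len(series))]

jittery = [10, 30, 10, 30, 10, 30, 10]   # the per-frame "see-saw"
smoothed = smooth(jittery)               # a far flatter, readable line
```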

I think if more people really understood the inner workings here and the amount of effort expended on this type of work, people could stop console warring, or at least stop flinging shit at these guys, and just focus on other things. I suppose that's a work in progress.
 
I hope you can return to this at some time - it was incredibly fascinating
 

Would it be possible to spit out the numbers for individual frames?

E.g., Frame 1 = 8,294,400 pixels

Measure over a set interval (2-3 minutes of matched gameplay/cutscene), then determine the total pixels rendered over that period. Then simply report the percentage difference between the two platforms as the recorded value in videos, etc.

Looking at your graph, it'd likely be broadly in line with the calculated differences between the two systems. I'd personally find that incredibly interesting to see regardless.

Really amazing work.
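For what it's worth, the arithmetic being proposed here is straightforward; a hedged sketch (the per-frame resolutions below are illustrative placeholders, not measurements):

```python
def interval_pixels(frame_resolutions):
    """Total rendered pixels over a clip, given (width, height) per frame."""
    return sum(w * h for w, h in frame_resolutions)

def percent_difference(a, b):
    """Percentage difference of platform a relative to platform b."""
    return 100.0 * (a - b) / b

# Illustrative only: one second of matched footage, 2160p vs 1528p.
xsx = interval_pixels([(3840, 2160)] * 60)
ps5 = interval_pixels([(2720, 1528)] * 60)
diff = percent_difference(xsx, ps5)
```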
 
Every single frame is a 4K frame, as they're recorded from the HDMI output to the capture unit, so in this case every frame will always be 8,294,400 pixels. The upscaling method is what makes it harder or easier for us to pixel count. If they use a non-temporal upscale (upscaling right before output), I can use another tool that runs frequency analysis on the frame to determine the exact pixel size of the original frame before the upscale occurred. That is trivially easy by comparison, and applicable to basically all the older titles, I think. I should finish that tool off when I think about it, or work on that path further, since it actually is capable of pixel counting. But that's for another time.
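As an illustration of the frequency-analysis idea (a toy reconstruction, not the author's tool): an ideal spatial upscale adds no content above the source's Nyquist frequency, so the highest populated FFT bin of a scanline reveals the native width.

```python
import numpy as np

def estimate_native_width(row: np.ndarray, thresh: float = 1e-8) -> int:
    """Estimate a scanline's pre-upscale width from its spectral cutoff."""
    spectrum = np.abs(np.fft.rfft(row))
    spectrum /= spectrum.max()
    cutoff = np.nonzero(spectrum > thresh)[0][-1]  # highest populated bin
    return int(2 * cutoff)                         # bins above M/2 stay empty

# Synthesize a 3840-wide scanline "ideally upscaled" from 2560 native pixels
# by band-limiting a random spectrum to the native Nyquist bin (2560 / 2).
rng = np.random.default_rng(2)
native_bins = 2560 // 2
spec = np.zeros(3840 // 2 + 1, dtype=complex)
spec[:native_bins + 1] = (rng.normal(size=native_bins + 1)
                          + 1j * rng.normal(size=native_bins + 1))
row = np.fft.irfft(spec, n=3840)
width = estimate_native_width(row)
```

Real captures are far messier than this ideal case: practical interpolation kernels leak some energy above the source Nyquist, which is exactly why post-processed temporal upsampling breaks the approach, as described below.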

When developers do temporal upsampling and then run post-processing on the upsampled image, you can't use frequency analysis on it anymore, because now you have detail across the full 4K of pixels. It gets incredibly complex to figure out what the image was before the upsample, since more work was done on it afterwards. My tool recorded Demon's Souls fluctuating at around 1800p, which is wrong, since I had set it to the 1440p mode. I suspect reverse engineering DLSS would be near impossible.

So pixel counting is incredibly tough to figure out with these new methods, and it's not as simple as identifying where to pixel count and then counting automatically. That's a crazy hard endeavor; you likely wouldn't be able to count every single frame. If I were to attempt automatic pixel counting, I'd need to identify the upscale by its artifacts (as there are _TONS_, and I assume upscaling from different resolutions causes different artifact patterns). But that's for another time for me to think about. I need a faster way to process and train on a whole video, and a 1070 won't cut it to iterate on a solution. And right now it's too hard to find a new Nvidia GPU on the market.

There are other methods to pursue as well. I tried a second, entirely different method to see if that would help DF, but the calculation time was even slower, and it's perhaps too sensitive. It compares exactly two like-for-like images and determines, per frame, how far apart the two images are as a percentage. This one requires an incredible amount of tuning because it will pick up encoding differences.
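The second method isn't specified beyond "per-frame percentage difference of like-for-like images"; a naive per-pixel version, purely as a sketch, could look like this (and, as noted above, such a measure happily picks up encoding noise too):

```python
import numpy as np

def frame_difference_pct(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two matched frames,
    expressed as a percentage of the 8-bit value range."""
    diff = np.abs(a.astype(np.float64) - b.astype(np.float64))
    return float(diff.mean() / 255.0 * 100.0)

# Two like-for-like "frames": one identical, one uniformly brighter.
ref = np.full((4, 4), 100, dtype=np.uint8)
identical = ref.copy()
brighter = ref + np.uint8(51)   # 151 everywhere; 51/255 of the range = 20%
```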
 
I think this is the future of pixel counting! Measuring averages of perceptible resolution is the way to go. Indeed, the official resolution (of the geometry) often doesn't tell the whole story about the final image players see, because of asset and effect resolution, AA and such.
IIRC there was one game in which PS5 had lower numbers on counted pixels for their bounds, but my tool picked up that it had consistently higher image quality over the XSX version for 2 clips that are fairly close.
By the way, what game was that? Maybe it was due to a difference in post effects?
 
I think this is the future of pixel counting! Measuring averages of perceptible resolution is the way to go. Indeed, the official resolution (of the geometry) often doesn't tell the whole story about the final image players see, because of asset and effect resolution, AA and such.
Yeah, it gets closer to what the player will see. One interesting challenge here is volumetric fog. It really ruins the calculations, so the absence of fog improves the metric score massively.

By the way, what game was that? Maybe it was due to a difference in post effects?
Something similar to another Dark Souls-style game; I want to say it was Mortal Shell. Its 1440p was consistently better than what the XSX was putting out, despite the XSX being at 1800p. Which is ironic, because of the undue pressure to move the PS5 to 1800p to match the XSX. My tool was saying its 1440p was equal or better, lol. The 1800p patch made things worse, lol.

But there are downsides to my algorithm where the amount of volumetric fog on screen is concerned.

If the goal for readers is to gauge performance, there needs to be some combined measure of image quality x resolution x frame rate, or something like it. It can't be one or the other. It would be an interesting metric to create, but really hard to make work for all cases.
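A hedged sketch of what such a composite could look like (the function, its normalisation scheme, and the numbers are all invented for illustration; nothing here is a proposed standard):

```python
def composite_score(iq, pixels, fps, *, iq_ref, pixels_ref, fps_ref):
    """Product of image quality, resolution and frame rate, each normalised
    against a reference platform. A score of 1.0 matches the reference."""
    return (iq / iq_ref) * (pixels / pixels_ref) * (fps / fps_ref)

# Illustrative numbers only: 10% better image quality, 25% fewer pixels,
# identical frame rate. The composite nets out slightly below the reference.
score = composite_score(1.10, 0.75 * 8294400, 60,
                        iq_ref=1.0, pixels_ref=8294400, fps_ref=60)
```

Getting the weighting right across all cases is exactly the hard part the post flags: a plain product over-rewards resolution relative to perceived quality.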
 
According to DF's tests, both versions of COD Warzone are running at their maximum BC resolution (Pro and X1X) all the time. So it's basically 1528p vs 2160p; no DRS involved.

https://www.eurogamer.net/articles/digitalfoundry-2020-call-of-duty-warzone-next-gen-showdown
 

A lot of interesting things from that.

They've been hearing from a lot of developers that RT reflections on fast-moving vehicles are very, very hard, hence why we haven't really seen them.

Their dynamic scaling adjusts IQ settings in addition to resolution. It sounds like they prefer to scale down IQ settings that would be virtually unnoticeable when driving fast before they start to scale down resolution. I like that.

Beautiful game. I keep saying I might try X racing game after roughly 2 decades of not playing a racing game. And I'll say it again, but this time I might actually do it because the game looks so phenomenal.

Regards,
SB
 
Full Interview Article @ https://www.eurogamer.net/articles/digitalfoundry-2021-metro-exodus-tech-interview

The Making of Metro Exodus Enhanced Edition
A Digital Foundry tech interview.

Metro Exodus's Enhanced Edition is a very important game - it's the first triple-A game we know of that's built on technology that demands the inclusion of hardware-accelerated ray tracing hardware. To be clear, the new Metro is not a fully path-traced game built entirely on RT, but rather a hybrid renderer where global illumination, lighting and shadows are handled by ray tracing, while other elements of the game still use traditional rasterisation techniques. The bottom line though is that this is the foundation for developer 4A Games going forward: its games will require a PC with hardware RT graphics capabilities, while their console versions will tap into the same acceleration features found on the ninth generation consoles. And while 4A is the first developer to push this far into next generation graphics features, it's clearly not going to be the last.

We've already reviewed the PC version of Metro Exodus Enhanced Edition and will be following up in due course with detailed analysis of the PS5, Xbox Series X and Series S renditions of the game. However, in putting together our initial coverage of the game, 4A Games were extremely helpful and collaborative in ensuring the depth and accuracy of our work. If you've seen our video breakdown of the title, you'll have seen the behind the scenes editor shots showing level design workflow before and after the transition to ray tracing - but that's just the tip of the iceberg. 4A were also very helpful in going deep - really deep - in explaining how their RT implementation works. On top of that, the developer gave us an excellent overview of the project: why it was time to move their engine to RT, how so many new technologies made their way into Metro Exodus Enhanced Edition, and why we need to wait a little while until the console versions are released.

...
I just grabbed Metro Exodus Gold Edition (includes both DLCs) for £16; it's on sale on PSN, if anyone is interested! Assuming (hoping) this is the same as the "complete edition" 4A was talking about. It's a free upgrade to PS5, so I think it is.

EDIT: Yep, it's the PS5 upgrade. For £16 you can't go wrong!
 
We've taken a good, long, hard look at how Metro Exodus Enhanced Edition runs on Xbox Series X and S consoles - so how does PlayStation 5 slot into the stack? Alex Battaglia and John Linneman link up to share their thoughts on the PS5 build.

Quick Notes:
* Matching settings.
* Loading times pretty much a wash.
* XSX has a slightly higher overall resolution.
* PS5 has a slightly higher overall performance.
 
Does the RTGI accumulate over the same number of frames between console and PC?

It's a nice early showing for both boxes. I'd definitely like to see more of these GI techniques; they're more impactful than RT reflections (though obviously both at the same time is better still).
 