Digital Foundry Article Technical Discussion [2021]

The Xbox One has hardware features for backwards compatibility, thus BC is not and never was a reaction or a response. It was planned for before the SoC was even taped out.

Yes, but an increased focus on and investment in it could very well be partially reactive.
 
The Xbox One has hardware features for backwards compatibility, thus BC is not and never was a reaction or a response. It was planned for before the SoC was even taped out.
Are we sure those features are for backwards compatibility of the user-facing type, and not features there to aid developers making cross-generational games that leverage certain exotic Xbox 360 features? Again, I never said that there was no work done before BC was publicly available. I'm sure it's something they were exploring. But in 2013 they were talking publicly about using the power of the cloud for BC, and for pretty much everything else. It took them basically two years to get a mostly working version out to the general public, it required specifically repackaged code, and it had a very limited library. If BC is baked into the hardware so deeply, why is there a requirement to run repackaged code?
 
If BC is baked into the hardware so deeply, why is there a requirement to run repackaged code?
The parts of 360 that were too expensive to emulate were brought over in hardware.
BC took a back seat when they were sidelined by the PS4's move to 8GB. They were expecting 4GB, and that they would be competitive.

Not to mention the OS and Kinect were also nowhere near ready for launch.
 
DF Article @ https://www.eurogamer.net/articles/...arvels-avengers-next-gen-console-head-to-head

Marvel's Avengers tested on PS5 and Xbox Series consoles
A Stark difference?

Last week, we took an exclusive look at the PlayStation 5 version of Marvel's Avengers, but Xbox Series code - difficult to supply in advance owing to Smart Delivery complications - was not forthcoming. So how have developers Crystal Dynamics and Nixxes handled the port to both Series X and Series S consoles? The results are in, and we are looking at a game which exemplifies what we have referred to as the 'post resolution era' - where raw pixel counts are only one component in a game's visual make-up and perhaps not the most important.

In terms of the overall set-up offered by Marvel's Avengers on Xbox Series consoles, it's much the same as PlayStation 5 in delivering quality and performance modes. Series X's quality mode is essentially the same as PS5's: compared to last-gen consoles, users get higher resolution screen-space reflections, more destruction, better water rendering, higher resolution textures and all of the other enhancements previously detailed. It targets native 4K rendering with minor dynamic resolution tweakery depending on content. Series S? Quality mode is much the same again with a 1440p target, but lacks some visual features including the higher resolution texture pack enjoyed by PS5 and Series X. 30fps is essentially a lock on all systems in this mode.

The performance mode is what separates the pack, with Series S operating within a ballpark 720p to 1080p window in hitting 60fps, with further visual cutbacks like reduced foliage density, murkier texture filtering and lower resolution particle effects. But it's the PS5 vs Series X differences that are most fascinating - both target a 4K display output, but PS5 uses checkerboard rendering while Series X goes native. What this means is that the Sony platform can resolve higher pixel counts in like-for-like content thanks to its checkerboard solution, but for various reasons, Series X delivers a crisper picture. Both still use dynamic resolution scaling, with Series X working with a generally wider DRS window. Owing to its checkerboard solution, PS5's UI scales with resolution too, which looks a touch odd.


...
 
The video does a good job of comparing the images, but I don't get the language being used. It feels like sticking to some Sony brand guideline by default instead of saying what it is -- saying the numbers don't tell the whole story is true if you use the output numbers, but it's actually about what you'd expect in terms of fuzziness from cutting the horizontal resolution in half temporally.

Like, okay, 3328x1872 sounds lower than 3584x2016, but half of that 3584 (at best) is from the last frame, so if you're looking at a scene in fast motion it's something closer to 1792x2016 vs 3328x1872... what do you expect? That's 60% as many pixels!

I guess on the other hand, this will be much harder to count for VRS, so maybe using fuzzy language about perceptual image quality is best for comparisons going forward? But still, I feel like with DLSS people quote the render res, not the reconstructed res.
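
(To make that arithmetic concrete, here's the back-of-the-envelope in Python -- using the counted figures above, and treating the reused half of the checkerboard as worthless in fast motion, which is the worst case:)

    # Fresh (newly shaded) pixels per frame, worst case for checkerboarding
    ps5_cb_w, ps5_cb_h = 3584, 2016   # PS5 checkerboard output resolution
    xsx_w, xsx_h = 3328, 1872         # Series X native render resolution

    ps5_fresh = (ps5_cb_w // 2) * ps5_cb_h   # half the columns come from last frame
    xsx_fresh = xsx_w * xsx_h

    print(ps5_fresh)              # 3612672
    print(xsx_fresh)              # 6230016
    print(ps5_fresh / xsx_fresh)  # ~0.58 -> roughly "60% as many pixels"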
 
The video does a good job of comparing the images, but I don't get the language being used. It feels like sticking to some Sony brand guideline by default instead of saying what it is -- saying the numbers don't tell the whole story is true if you use the output numbers, but it's actually about what you'd expect in terms of fuzziness from cutting the horizontal resolution in half temporally.

Like, okay, 3328x1872 sounds lower than 3584x2016, but half of that 3584 (at best) is from the last frame, so if you're looking at a scene in fast motion it's something closer to 1792x2016 vs 3328x1872... what do you expect? That's 60% as many pixels!

I guess on the other hand, this will be much harder to count for VRS, so maybe using fuzzy language about perceptual image quality is best for comparisons going forward? But still, I feel like with DLSS people quote the render res, not the reconstructed res.
Resolution isn't the be-all and end-all of image quality, however. DF is right to steer the discussion away from pixel counting.
Pixel counting is probably now being used as a measure of how much horsepower is being extracted. We can have a solid discussion about upscaling techniques; each type of upscaling technique has its own pros and cons, which can be mitigated with more pixels, of course.

However, image quality is really just a look at the final output. We should be rewarding developers for finding methods that improve image quality without needing to increase the horsepower throughput to obtain it.

But if the goal is to talk technically about horsepower extraction, by all means, it's perfectly fair game here. But the DF videos are here to educate the masses; signaling a move away from pixel counting seems ideal.

I shouldn't care that something is using TAA, DLSS, CBR, DRS, VRS, etc. The pixel count shouldn't matter; we should be taking a look at the objective output and comparing the outputs to each other. If visually people cannot see the differences, we should be celebrating the developers' success here.
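
(If we wanted to make "objective output" measurable rather than eyeballed, something like a structural-similarity score between matched captures would do it. A minimal sketch with scikit-image -- the filenames here are hypothetical:)

    # pip install imageio scikit-image
    import imageio.v3 as iio
    from skimage.metrics import structural_similarity as ssim

    # Matched frame captures from each platform (hypothetical filenames)
    a = iio.imread("ps5_capture.png")
    b = iio.imread("xsx_capture.png")

    # SSIM of 1.0 means perceptually identical output, whatever the internal res
    score = ssim(a, b, channel_axis=-1)
    print(f"SSIM: {score:.3f}")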
 
Resolution isn't the be-all and end-all of image quality, however. DF is right to steer the discussion away from pixel counting.
Pixel counting is probably now being used as a measure of how much horsepower is being extracted. We can have a solid discussion about upscaling techniques; each type of upscaling technique has its own pros and cons, which can be mitigated with more pixels, of course.

However, image quality is really just a look at the final output. We should be rewarding developers for finding methods that improve image quality without needing to increase the horsepower throughput to obtain it.

Well, yeah, it is! That's a lot of what the GPUs do -- run fragment shaders. If one console is performing the same but rendering half as many pixels, that's a story!

I agree that we should praise image reconstruction techniques -- visuals in games just aren't going to advance without decreasing resolution somewhere, be it temporal (rendering effects with low sample counts and resolving with TAA), spatial (VRS, DRS), or both (checkerboarding, DLSS) -- all these reconstruction techniques are awesome and should be widely adopted. But call a spade a spade -- one version of the game is doing much less rendering work than the other.

Edit: in response to your added paragraph, the thing is, you can see the difference here, and it's obvious why. Moving surfaces, things that additionally have low temporal resolution, etc., are all significantly blurrier and show rectangular pixels. That's not a mystery of the differing techniques; that's a consequence of rendering half as many pixels each frame.
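
(For the spatial side, a toy DRS controller is simple enough to sketch -- scale the next frame's render height against the GPU time budget. Simplified and hypothetical; real engines add smoothing and hysteresis:)

    def drs_next_height(gpu_ms, target_ms, height, h_min=1296, h_max=2160):
        # Shaded area scales with height^2 at a fixed aspect ratio, hence sqrt
        scale = (target_ms / gpu_ms) ** 0.5
        return max(h_min, min(h_max, round(height * scale)))

    # A 19ms frame against a 16.7ms (60fps) budget at 1872p drops the res:
    print(drs_next_height(19.0, 16.7, 1872))   # -> 1755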
 
Well, yeah, it is! That's a lot of what the GPUs do -- run fragment shaders. If one console is performing the same but rendering half as many pixels, that's a story!

I agree that we should praise image reconstruction techniques -- visuals in games just aren't going to advance without decreasing resolution somewhere, be it temporal (rendering effects with low sample counts and resolving with TAA), spatial (VRS, DRS), or both (checkerboarding, DLSS) -- all these reconstruction techniques are awesome and should be widely adopted. But call a spade a spade -- one version of the game is doing much less rendering work than the other.
Yeah, and that's okay ;)
When we are talking shop, I agree with you. There's more curiosity here about how things work and why they work the way they do; that's something we do here.
 
The video does a good job of comparing the images, but I don't get the language being used. It feels like sticking to some Sony brand guideline by default instead of saying what it is -- saying the numbers don't tell the whole story is true if you use the output numbers, but it's actually about what you'd expect in terms of fuzziness from cutting the horizontal resolution in half temporally.

Like, okay, 3328x1872 sounds lower than 3584x2016, but half of that 3584 (at best) is from the last frame, so if you're looking at a scene in fast motion it's something closer to 1792x2016 vs 3328x1872... what do you expect? That's 60% as many pixels!

I guess on the other hand, this will be much harder to count for VRS, so maybe using fuzzy language about perceptual image quality is best for comparisons going forward? But still, I feel like with DLSS people quote the render res, not the reconstructed res.
A-HA! I thought so too, so I did wonder why the surprise from Alex at seeing a sharper image on X. The PS5 is rendering fewer pixels, for all intents and purposes, hence the IQ hit. Or at least that's how I understand checkerboarding.
 
A-HA! I thought so too, so I did wonder why the surprise from Alex at seeing a sharper image on X. The PS5 is rendering fewer pixels, for all intents and purposes, hence the IQ hit. Or at least that's how I understand checkerboarding.
Not "for all intents and purposes" -- it just is. The trick is that, where possible (and AFAIK most of the complexity of the algorithm is in the "where possible"), it re-uses the pixels from the last frame to fill out the pixels that it's not rendering, so when things aren't moving fast you get an image that looks basically identical to native res. This is why so many PS4 Pro (and even base PS4!) games look so incredible and push relatively high display resolutions.
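
(A toy version of that resolve, using a column interleave to keep it short -- real CBR works on 2x2 pixel quads and uses motion vectors/ID buffers to decide when history is safe to keep, all of which is omitted here:)

    import numpy as np

    def cb_resolve(fresh_half, prev_full, frame_idx):
        # Keep every pixel from last frame's resolved image...
        out = prev_full.copy()
        # ...then overwrite the half of the columns shaded this frame
        # (even columns on even frames, odd columns on odd frames)
        out[:, frame_idx % 2::2] = fresh_half
        return out

    # For a static scene, two frames' halves add up to a full native image
    h, w = 1872, 3328
    frame0 = cb_resolve(np.ones((h, w // 2)), np.zeros((h, w)), 0)
    frame1 = cb_resolve(np.ones((h, w // 2)), frame0, 1)
    print(frame1.min() == 1.0)   # True: every pixel has been freshly shaded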
 
I think the thing exacerbating the issue of CB rendering artifacts in this game is the fact that post-processing and UI elements are included in the CB process. If both elements were decoupled from it, I bet the presentation would be a lot better and better received. Having said that, they must have their reasons.
 
The video does a good job of comparing the images, but I don't get the language being used. It feels like sticking to some Sony brand guideline by default instead of saying what it is -- saying the numbers don't tell the whole story is true if you use the output numbers, but it's actually about what you'd expect in terms of fuzziness from cutting the horizontal resolution in half temporally.

Like, okay, 3328x1872 sounds lower than 3584x2016, but half of that 3584 (at best) is from the last frame, so if you're looking at a scene in fast motion it's something closer to 1792x2016 vs 3328x1872... what do you expect? That's 60% as many pixels!

I guess on the other hand, this will be much harder to count for VRS, so maybe using fuzzy language about perceptual image quality is best for comparisons going forward? But still, I feel like with DLSS people quote the render res, not the reconstructed res.
The interesting thing is that, since XSX can render native 1800p, why didn't they use at least native 1600p~1700p for PS5?
 
The interesting thing is that, since XSX can render native 1800p, why didn't they use at least native 1600p~1700p for PS5?
Yeah, that's the interesting part. My personal theories are:
a.1 - They decided not to optimize the PS5 version much at all. Maybe the Pro settings "just worked" and were good enough. Clearly it's not doing perfectly, since there are still some FX-based frame rate dips, but why spend development time if you're satisfied?
a.2 - It could also be a weird fluke -- maybe this is an engine that happens to be perfect for the Series X and is actually really hard to optimize for PS5, so that'll have to wait for the next game.
b. Like some posters speculated, maybe it's the shared power budget between CPU and GPU finally taking its toll on the PS5... but I still don't think that seems very likely. (I still can't imagine they'd make a console that's even weaker than it seems on paper, yet still so expensive.)
 
There are some obvious issues going on with the PS5 DRS/CBR combination in the high-performance mode. Looking back at Alex's prior Avengers video, you can see issues with the PS5 DRS/CBR combination in comparison to the PS4 Pro quality mode (which uses CBR only). From my understanding, the PS5 high-performance mode image quality is supposed to be better than the Pro's quality-mode image quality, which isn't the case at times. I noticed areas of aliasing (see picture below, or the queued video) within various spots of the video in PS5 high-performance mode but not visible in PS4 Pro's quality mode. But as I stated before, wasn't the PS5 high-performance mode [image quality] supposed to be better than the Pro's quality mode?

[image: 0T54TLA.jpg]

 
Well, yeah, it is! That's a lot of what the GPUs do -- run fragment shaders. If one console is performing the same but rendering half as many pixels, that's a story!

I agree that we should praise image reconstruction techniques -- visuals in games just aren't going to advance without decreasing resolution somewhere, be it temporal (rendering effects with low sample counts and resolving with TAA), spatial (VRS, DRS), or both (checkerboarding, DLSS) -- all these reconstruction techniques are awesome and should be widely adopted. But call a spade a spade -- one version of the game is doing much less rendering work than the other.

Edit: in response to your added paragraph, the thing is, you can see the difference here, and it's obvious why. Moving surfaces, things that additionally have low temporal resolution, etc., are all significantly blurrier and show rectangular pixels. That's not a mystery of the differing techniques; that's a consequence of rendering half as many pixels each frame.

I kind of agree with this. I do think that Alex called it out for exactly what it is, though: Xbox Series X has nicer image quality even though the output resolution is lower (with a higher internal resolution).

When I hear "post resolution era" I imagine Microsoft and Sony rushing around to improve their upscaling technology.

I think the games a couple of years from now are going to look substantially better than today's games while on the same hardware, with revolutionised loading.
 
The video does a good job of comparing the images, but I don't get the language being used. It feels like sticking to some Sony brand guideline by default instead of saying what it is -- saying the numbers don't tell the whole story is true if you use the output numbers, but it's actually about what you'd expect in terms of fuzziness from cutting the horizontal resolution in half temporally.

Like, okay, 3328x1872 sounds lower than 3584x2016, but half of that 3584 (at best) is from the last frame, so if you're looking at a scene in fast motion it's something closer to 1792x2016 vs 3328x1872... what do you expect? That's 60% as many pixels!

I guess on the other hand, this will be much harder to count for VRS, so maybe using fuzzy language about perceptual image quality is best for comparisons going forward? But still, I feel like with DLSS people quote the render res, not the reconstructed res.
A-HA! I thought so too, so I did wonder why the surprise from Alex at seeing a sharper image on X. The PS5 is rendering fewer pixels, for all intents and purposes, hence the IQ hit. Or at least that's how I understand checkerboarding.
I think it is just me being a bit purposely obtuse with that wording :D
One reason is that the end resolution, and even the internal resolution, is not always very important IMO when two different ways of generating pixels are being used. For example, 1440p internal res upscaled to 4K with DLSS 2.0 would produce a very different and interesting-looking image in comparison to near-1800p with TAA. Think about it: 1440p-to-4K DLSS 2.0 vs. 1800p TAA vs. checkerboard 4K. All have various internal resolutions - but the quality of the end product does not line up with their internal resolutions in ascending order.
Basically, I am trying to decouple the discussion from the resolution numbers and start talking about image quality again. Quite often, the numbers just are not that interesting to us at DF anymore.
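
(For reference, the internal pixel counts per frame in that comparison -- with checkerboard 4K shading half of the 3840x2160 grid per frame:)

    mp = lambda w, h: w * h / 1e6   # megapixels shaded per frame

    print(mp(2560, 1440))           # 3.69 MP - 1440p input to DLSS 2.0
    print(mp(3200, 1800))           # 5.76 MP - 1800p with TAA
    print(mp(3840, 2160) / 2)       # 4.15 MP - checkerboard 4K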
There are some obvious issues going on with the PS5 DRS/CBR combination in the high-performance mode. Looking back at Alex's prior Avengers video, you can see issues with the PS5 DRS/CBR combination in comparison to the PS4 Pro quality mode (which uses CBR only). From my understanding, the PS5 high-performance mode image quality is supposed to be better than the Pro's quality-mode image quality, which isn't the case at times. I noticed areas of aliasing (see picture below, or the queued video) within various spots of the video in PS5 high-performance mode but not visible in PS4 Pro's quality mode. But as I stated before, wasn't the PS5 high-performance mode [image quality] supposed to be better than the Pro's quality mode?

[image: 0T54TLA.jpg]

Good thing to point out.
One thing I said in the video, but did not show examples of, is that I found pixel counts where the PS5 in performance mode was lower res than the PS4 Pro in quality mode. Makes sense though, as it is targeting 60 fps with much higher particle resolution.
 
@Dictator any idea why they didn't just go with dynamic res like XSX, but with some lower bands, on PS5? CB was maybe good for the PS4 Pro, but it's surely not a good idea for PS5.
 