Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

Status
Not open for further replies.
Personally I see value in VRS, but it probably won't be truly leveraged until later in the generation, once games get more ambitious in scope and need to squeeze out as much perf as possible. Preferably you use a technique like VRS on parts of the frame that aren't going to be visible to the player and have no major impact on visual quality or fidelity when implemented.

Like any technique, if it's used skillfully it will absolutely make a difference; it doesn't even matter if the performance gain it gives is "less" than other techniques', they all add up.



Lol, that's exactly how not to use VRS xD. For things like faces, or I'd argue anywhere on the player character model, it's probably best not to use VRS, because the degradation will be more noticeable there, especially over time. Applying it to things that aren't of significance always seems like the better (and, I'd argue, the intended) use.

Maybe the reason VRS and async compute aren't playing nicely together in these games, on Nvidia cards no less, is driver incompatibility. As for what tier of VRS Series X is using: how many tiers are there, exactly? MS mentioned the Series systems supporting Tier 2, but is there a Tier 3 already? If they support the full suite of RDNA 2 features and there's a Tier 3 that MS somehow isn't using, well, that'd be a bit of a facepalm imo.

So regarding Series X performance with VRS in Hivebusters, I'm leaning more toward it being a slight lack of optimization time, and possibly something in the engine causing a bit of incompatibility that requires tuning. Gears 5 had some odd issues running on Series X, as Digital Foundry pointed out a while back, things the team was aware of and ended up fixing with an update, though I don't exactly remember what they specifically did to fix them. It could be something similar: a bit more optimization of VRS in the Gears 5 engine and, obviously, in the DiRT 5 engine as well, which would probably need even more work given its status.



Gonna have to disagree on this one. By that notion, foveated rendering would fall in a similar ballpark, but IMO it doesn't. Any technique needs careful usage to leverage its benefits; some just require more skillful handling than others. Part of that also comes down to the time available during development.

Also, IIRC, the sharpening filter was mentioned in reference to Series S, not Series X. I could be wrong, but when MS rolled out the Series S reveal, and in a couple of Jason Ronald's subsequent interviews, they specified an image sharpening filter for that hardware; I've not seen any mention of something similar for Series X.
It was mentioned by NXGamer in his latest video, starting at 6:20. The sharpening filter artifacts are easily noticeable, but the textures are still higher quality on PS5. NXGamer explains it's because of the blurring effect of the VRS that is used extensively in the Xbox version.

 
As DF recalled in a Wolfenstein VRS video, it's the perfect feature for consoles. I don't think they're wrong; VRS, when well implemented, can help greatly with performance with no visible reduction in graphics. I place it where DLSS is at the moment: performance increases we really need, with no huge visual impact.
It's way too early to write it off as a feature one could live without. NXGamer's view on it doesn't surprise anyone, does it?
 
Even though I am a PS5 owner, I don't see the value of having sharper dirt-track textures when you pause the game and look at it frame by frame. I'd be up for videos like this having a segment about what you actually see/notice while playing, not when you stop and inspect behind every dust bin you can find.
All games do approximations and cheats; if the gameplay/fun element isn't suffering from slightly blurry ground textures, then it really shouldn't matter.
I would rank gameplay/fun first, then FPS smoothness, and then those last bells and whistles.

What I find funny is that 1080p is now considered a blurry mess, when last gen it was the holy grail of sharpness that few games consistently hit before the mid-gen refresh consoles came along.
 
VRS tends to work better in darkly shaded areas where detail isn't very visible to begin with. So, games with moodier, darker environments can really benefit from VRS. Games with lots of bright or high-contrast areas can highlight certain limitations of reduced shading quality, especially around geometry edges, which can give the appearance of more aggressive aliasing or a muddier/softer appearance across certain textures. As such, VRS has its limitations and/or tradeoffs like any other GPU feature... nothing is perfect.
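To make the "coarser shading where detail is invisible" idea concrete, here's a toy sketch of how a Tier 2 style shading-rate image could be derived from average tile luminance. The thresholds and function names are made up for illustration; only the rate codes loosely follow the D3D12_SHADING_RATE encoding.

```python
# Hypothetical sketch: darker 8x8 tiles, where shading detail is hard
# to see anyway, get coarser shading rates. Not any engine's real API.

TILE = 8  # shading-rate image granularity in pixels (typical for Tier 2)

# Rate codes loosely follow D3D12_SHADING_RATE values:
RATE_1X1 = 0x0  # full rate
RATE_2X2 = 0x5  # one shade per 2x2 pixel block
RATE_4X4 = 0xA  # one shade per 4x4 pixel block

def rate_for_tile(avg_luminance):
    """Pick a coarser rate for darker tiles (luminance in 0..1)."""
    if avg_luminance < 0.05:
        return RATE_4X4   # near-black: shading detail is invisible
    if avg_luminance < 0.25:
        return RATE_2X2   # dim: mild coarsening is hard to spot
    return RATE_1X1       # bright / high contrast: keep full rate

def build_rate_image(luma, width, height):
    """luma: row-major list of per-pixel luminance values in 0..1."""
    tiles_x, tiles_y = width // TILE, height // TILE
    rates = []
    for ty in range(tiles_y):
        row = []
        for tx in range(tiles_x):
            total = 0.0
            for y in range(ty * TILE, (ty + 1) * TILE):
                for x in range(tx * TILE, (tx + 1) * TILE):
                    total += luma[y * width + x]
            row.append(rate_for_tile(total / (TILE * TILE)))
        rates.append(row)
    return rates
```

A real implementation would also feed in motion vectors and edge detection, which is exactly where the bright/high-contrast caveat above comes from.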
 
NXGamer's view on it doesn't surprise anyone, does it?

Maybe I need to go back and watch the video, but I don't recall him being negative about VRS as a feature, just the implementation/use. Which, if he's seeing a degraded image, sounds like a reasonable observation.
Even then, that's all it was: an observation.

As you say VRS is a good feature, it's just early days.
 
A feature is only good if it can show in real applications that it's good. With those exclusive games it's fine to call it good, because we don't really see all the disadvantages of VRS (ignorance is bliss). With the same game available without VRS, now we can see the disadvantages across a whole game and many scenes (and not just in a few cherry-picked screenshots).

And by the way, I think Gears 5 is far from being the best-looking game, even on Xbox, mainly because it looks blurry. That was also my main complaint about Halo Infinite with its specific implementation of VRS.
 
A feature is only good if it can show in real applications that it's good. With those exclusive games it's fine to call it good, because we don't really see all the disadvantages of VRS (ignorance is bliss). With the same game available without VRS, now we can see the disadvantages across a whole game and many scenes (and not just in a few cherry-picked screenshots).

And by the way, I think Gears 5 is far from being the best-looking game, even on Xbox, mainly because it looks blurry. That was also my main complaint about Halo Infinite with its specific implementation of VRS.

True. Same for all the other features we've basically only heard about via exclusive games.
 
A feature is only good if it can show in real applications that it's good. With those exclusive games it's fine to call it good, because we don't really see all the disadvantages of VRS (ignorance is bliss). With the same game available without VRS, now we can see the disadvantages across a whole game and many scenes (and not just in a few cherry-picked screenshots).

And by the way, I think Gears 5 is far from being the best-looking game, even on Xbox, mainly because it looks blurry. That was also my main complaint about Halo Infinite with its specific implementation of VRS.

TBF though, you're using DiRT 5 as the measuring stick here: a multiplat arcade racer, a game without a major budget, and a game that up until its release on next-gen platforms was basically (somewhat unfairly) ridiculed for its earlier gameplay presentations.

There really is no such thing as a fundamentally bad technique, though, which is what you're making VRS out to be. If the technique were so poor, why would companies like AMD have invested R&D into not only the early implementations but also furthering its development? These companies have access to way more data points and hands-on time with the tech than many of us, so if it's a technique they feel is worth including in their hardware, it must have some merit.

Also, I don't understand the urge to write off 1P games that use such techniques well. Generally speaking, the bastions of showing how to leverage various technologies and techniques in gaming have historically been 1P efforts. It's been that way since Nintendo, since Sega, since PS1 and onward. 3P games, especially relatively smaller efforts, generally aren't the barometer in this regard, usually for good reason.
 
A feature is only good if it can show in real applications that it's good. With those exclusive games it's fine to call it good, because we don't really see all the disadvantages of VRS (ignorance is bliss). With the same game available without VRS, now we can see the disadvantages across a whole game and many scenes (and not just in a few cherry-picked screenshots).

And by the way, I think Gears 5 is far from being the best-looking game, even on Xbox, mainly because it looks blurry. That was also my main complaint about Halo Infinite with its specific implementation of VRS.

Gears 5 looks blurry?

Errr... Does it?
 
Worth noting that VRS can be used to increase shading quality, and reduce aliasing, without increasing resolution.

Targeting your calculations at the pixels that need them most is fundamentally a great technology, and it is unquestionably part of the future.
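For the curious, the "targeting" in D3D12's Tier 2 VRS comes from combiners that merge the per-draw rate, the per-primitive rate, and the screen-space rate image into one final rate. Here's a rough Python model of that behavior; the function names are mine, not the real API, and only the PASSTHROUGH/OVERRIDE/MIN/MAX semantics are meant to mirror the spec.

```python
# Toy model of D3D12-style shading-rate combiners. Rates are encoded
# here as (x_coarseness, y_coarseness): (1, 1) is full rate, (2, 2)
# shades one sample per 2x2 pixels, (4, 4) per 4x4, and so on.

def combine(a, b, combiner):
    """Merge two rate sources the way a VRS combiner would."""
    if combiner == "PASSTHROUGH":
        return a                      # ignore the second source
    if combiner == "OVERRIDE":
        return b                      # second source wins outright
    if combiner == "MIN":             # finest (best quality) of the two
        return (min(a[0], b[0]), min(a[1], b[1]))
    if combiner == "MAX":             # coarsest (cheapest) of the two
        return (max(a[0], b[0]), max(a[1], b[1]))
    raise ValueError(combiner)

def final_rate(per_draw, per_primitive, screen_space, c1="MAX", c2="MAX"):
    """Two-stage chain: (draw + primitive) first, then the rate image."""
    return combine(combine(per_draw, per_primitive, c1), screen_space, c2)
```

So a game can, for example, force full rate on the player model via the per-primitive source while still letting the screen-space image coarsen everything else, which is exactly the "faces stay sharp" use case discussed above.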

Yep, it's a really fine-grained way of deciding which parts of the frame need which levels of image precision. Maybe some people are put off by the technique because they think it's still only at Tier 1 in terms of availability, but Tier 2 has been present for a bit, and (I'm actually asking here) might Tier 3 already be available?

Does anyone know which of the two "costs" more to implement, between VRS and DLSS? As in developer labor? I see people in other places still thinking DLSS is "free" without knowing the developer has to program the functionality into the game on their own end. Can something like DLSS really become lower "cost" over time, to the point it could be implemented like lower-end graphical features that don't require a lot of targeted labor from devs?

That's what I've been hoping techniques like DLSS can eventually arrive at: genuinely huge advancements as a standard. I think techniques like VRS and foveated rendering have more or less shown their merit, outside of less-optimized examples. But I really want to see how VRS and AMD's Super Resolution (and in particular Microsoft's use of it, since it will be hardware-accelerated through DirectML) end up working in conjunction, and whether that can match Nvidia's DLSS work.
 
Yep, it's a really fine-grained way of deciding which parts of the frame need which levels of image precision. Maybe some people are put off by the technique because they think it's still only at Tier 1 in terms of availability, but Tier 2 has been present for a bit, and (I'm actually asking here) might Tier 3 already be available?

Does anyone know which of the two "costs" more to implement, between VRS and DLSS? As in developer labor? I see people in other places still thinking DLSS is "free" without knowing the developer has to program the functionality into the game on their own end. Can something like DLSS really become lower "cost" over time, to the point it could be implemented like lower-end graphical features that don't require a lot of targeted labor from devs?

That's what I've been hoping techniques like DLSS can eventually arrive at: genuinely huge advancements as a standard. I think techniques like VRS and foveated rendering have more or less shown their merit, outside of less-optimized examples. But I really want to see how VRS and AMD's Super Resolution (and in particular Microsoft's use of it, since it will be hardware-accelerated through DirectML) end up working in conjunction, and whether that can match Nvidia's DLSS work.
Tier 3 VRS isn't a thing; the tiers describe the different feature levels defined in the DX12 Ultimate spec, linked below. Tier 1 is basically VRS basic, and Tier 2 is VRS advanced.

Implementing VRS, as far as I understand, is done at the engine level. And IIRC, Alex from Digital Foundry said that Microsoft has made it easy to implement in an engine, taking a few days of dev time.

https://microsoft.github.io/DirectX-Specs/d3d/VariableRateShading.html
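As a back-of-the-envelope illustration of the difference between the two tiers: Tier 1 can only pick a single rate for an entire draw call, while Tier 2's rate image lets each 8x8 tile carry its own rate, so coarse shading can be confined to tiles that can afford it. A toy cost model, purely illustrative and not any engine's API:

```python
# Rough shading-cost comparison between the two VRS tiers.
# A rate (rx, ry) means one shader invocation per rx*ry pixel block.

def shade_cost_tier1(pixel_count, rate):
    """Tier 1: one rate applied uniformly to the whole draw."""
    rx, ry = rate
    return pixel_count // (rx * ry)

def shade_cost_tier2(tile_rates, tile_pixels=64):
    """Tier 2: each 8x8 tile (64 pixels) uses its own rate from the
    screen-space rate image, so cost varies per tile."""
    return sum(tile_pixels // (rx * ry) for rx, ry in tile_rates)
```

With Tier 1 you either pay full price everywhere or coarsen everything, including faces and UI-adjacent detail; with Tier 2 a mostly full-rate frame can still claw back cost in its dark or blurry tiles, which is why Tier 2 is the interesting one for consoles.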
 
Tier 3 VRS isn't a thing; the tiers describe the different feature levels defined in the DX12 Ultimate spec, linked below. Tier 1 is basically VRS basic, and Tier 2 is VRS advanced.

Implementing VRS, as far as I understand, is done at the engine level. And IIRC, Alex from Digital Foundry said that Microsoft has made it easy to implement in an engine, taking a few days of dev time.

https://microsoft.github.io/DirectX-Specs/d3d/VariableRateShading.html

That's great in terms of time cost for devs, then. So once it's implemented, how much targeted coding does VRS demand versus other GPU features? Basically, what's the volume of code needed to use VRS in a game, its relative complexity, etc.? It'd be really good if those costs are as small as the implementation cost, as that'd encourage more usage where appropriate.

The only other factor would be developer skill and creativity in applying the technique itself, but that's outside the control of the technique, AMD, and Microsoft :p
 
First, as stated in the conclusion, there's a place for both PCs and consoles and it's not as simple as a price-for-price comparison. Still, as an exercise in understanding the graphics capabilities of a modern game console (the PlayStation 5 was used here), we set forth on matching the Sony PlayStation 5's graphics settings against a PC, the latter of which has individual controls for various in-game options. Once we matched the settings to be roughly equal, we ran benchmarks against GPUs (and one CPU) until we found a performance threshold roughly equal to the average FPS / frametime performance of the PlayStation 5.
This gives us a relatively like-for-like comparison to better understand what the PS5 is comparable to -- it's about a GTX 1060, 1070, and 1080, depending on game, when not running ray tracing. Ray tracing would be something we need to test once more games support it on the consoles. Testing includes a grouping of three games, but the overall concept extrapolates to other games as well. We've established a range of performance to estimate rasterization performance at equal graphics quality (roughly) between the devices.
Again, we think the PlayStation 5 and other consoles fill an important role in the market that can't be simply replaced by a PC, and likewise, a PC offers a lot of extensibility not given by a console. They both have an important place and thus this is more of an academic exercise in determining the performance and quality equivalence.

TIMESTAMPS
00:00 - PC vs. PS5 Graphics Comparison & Benchmarks
01:18 - Methodology Basics & Discussion
05:01 - Devil May Cry 5 Graphics Comparison (PC vs. PlayStation 5)
06:53 - Performance & Framerate Modes (120FPS, 60FPS)
08:35 - PlayStation 5 & PC Benchmark Charts
09:16 - Frametime Plot vs. PC (DMC5)
12:07 - 120Hz vs. 240Hz Explanation
13:23 - Screen Tearing Explanation on Console vs. PC
17:29 - Logging vs. Capture Methodology & Data
19:50 - DiRT 5 Matched Graphics Settings
22:55 - DiRT 5 PS5 vs. PC Performance Analysis & Matched Hardware
23:36 - DiRT 5 120Hz vs. 240Hz
24:03 - Frametime Chart (DiRT 5)
24:39 - Borderlands 3 Matched Graphics
27:00 - BLANDS 3 Performance Benchmarks
27:06 - BLANDS 3 120Hz vs. 240Hz
27:35 - Borderlands 3 Frametime Chart (PS5 vs. PC)
28:22 - Conclusion: A Place for PCs & Consoles Alike
 
This gives us a relatively like-for-like comparison to better understand what the PS5 is comparable to -- it's about a GTX 1060, 1070, and 1080, depending on game, when not running ray tracing. Ray tracing would be something we need to test once more games support it on the consoles. Testing includes a grouping of three games, but the overall concept extrapolates to other games as well. We've established a range of performance to estimate rasterization performance at equal graphics quality (roughly) between the devices.

Woah, the 10 series? He didn't mean the 20 series? And that's not even taking RT performance into account? I wonder if he has done similar comparisons between Series X and PC and found similarly, well, surprising (to me anyway, unless he actually did mean 2060, 2070, etc.) performance results. o_O

EDIT: Nope, that's definitely the GTX 10 series in the results. So... did AMD overhype even some of the rasterization performance of RDNA 2 here, or am I underselling the GTX 10 series in comparison to the RTX 20 series?
 
Nice video, seems accurate.

Conclusion: A Place for PCs & Consoles Alike

Very true. Without exclusives, there's not all that much reason aside from price/perf somewhat close to launch. It's scary to see more and more going to PC, though, and the Sony boss's comments on that.
 