Digital Foundry Article Technical Discussion [2021]

Status
Not open for further replies.
@Dictator
Hey Alex, is it possible to look at widescreen resolution performance? Also, I'm not sure if Xbox Series consoles support widescreen resolutions like PC (edit: they do not)? But that would be an interesting comparison for a game like this, since ultrawide setups are supported natively.

Some OP shit in MP =P I wanted to pull the trigger on a new monitor to abuse this, but I didn't have the guts to.
 
Some of the review scores seemed to factor in the multiplayer and deducted points because of the cosmetics, battle pass, and playlists, the delay of Forge, and the delay of co-op. I'd say the HI review scores are more impressive than what the overall numbers show.

Well, that's what I mean by polishing. :)
 
Even with Nvidia's history it's certainly not expected.

The Series X is ~RTX 2080 level, which is 25-30% faster than a GTX 1080.

So it is definitely expected.

Even the RTX 2070 (~PS5's level) is 15-20% faster than a GTX 1080.

So if a GTX 1080 matched or beat the performance offered by the XSX and PS5, then there's something seriously wrong.
 
I'm wary of any press release claims like this. Nvidia claims something like 20-30% performance also, IIRC, but most games that support VRS are in the 5-10% range.

The problem is that laymen like us only ever get to see average framerates or perhaps 99th-percentile framerates, etc. Occasionally we may get to see minimum and maximum framerates. But none of those tells us what is going on frame by frame, or what the cost (or cost savings) of any given feature is on a frame-by-frame basis.
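To see why the headline numbers hide this, here's a rough sketch (my made-up trace, not real Halo data): a handful of hitches barely move the average, but show up immediately in the 99th percentile and maximum.

```python
import statistics

# Hypothetical trace: 990 frames at a steady 16.7 ms plus 10 hitches of 50 ms.
frametimes_ms = [16.7] * 990 + [50.0] * 10

avg = statistics.mean(frametimes_ms)
idx = int(0.99 * len(frametimes_ms))
p99 = sorted(frametimes_ms)[idx]
worst = max(frametimes_ms)

print(f"avg: {avg:.1f} ms")    # ~17.0 ms, looks basically fine
print(f"p99: {p99:.1f} ms")    # 50.0 ms, the hitches show up here
print(f"max: {worst:.1f} ms")  # 50.0 ms
```

The average says "smooth"; only the tail statistics reveal the stutter a player actually feels.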

As Iroboto mentioned, developers like VRS because it's predictable (unlike trying to mitigate general performance hiccups due to say player count, dynamic combat effects, etc.).

What does that mean? That means that in any given frame you may or may not use VRS enough for it to have a noticeable effect on your overall framerate, but it can have a potentially large effect on problem sections in a game resulting in a smoother overall experience.

Think of it another way. Checkerboard rendering is on ALL the time on ALL the pixels, so regardless of whether a particular frame "needs" the performance savings offered by checkerboard rendering, it is there ... along with all attendant artifacts. So the performance savings are large because it's on ALL the time on ALL the pixels, regardless of whether it is needed or even desired.

OTOH, VRS is applied to as much or as little of any given frame as the developer wants, specifically targeting any areas of a game that might have a performance "hole". So, instead of degrading the entire image, VRS can target specific parts of a scene, which means the developer is in control of which parts of a given scene see any potential artifacts. Thus, for any given frame, its contribution is anywhere from zero to large, because unlike something like checkerboard rendering, it can be enabled or disabled whenever the developer wants, and in only part of a frame.

So the performance savings are highly variable from a user's point of view, which will be reflected in small average FPS changes when you toggle it on or off, because VRS will only be used when the developer wants to use it, and only on the pixels the developer wants to use it on. From the developer's POV, they now have a tool that can give them up to 15-30% performance savings IF they decide that a given scene requires help, but they don't have to use it in every single frame or on the entire frame. Thus it's a very predictable and reliable performance-savings tool from a developer's POV.
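The "zero to large" point falls out of simple arithmetic. A back-of-envelope model (my numbers, purely illustrative): if a fraction of the screen shades in coarse NxN blocks, only that fraction saves pixel-shader invocations, so the frame-wide saving scales with coverage.

```python
def vrs_savings(coverage, rate=2):
    """Fraction of pixel-shader invocations saved when `coverage` (0..1)
    of the screen area shades at one invocation per rate x rate block.
    Toy model: ignores tile granularity and any fixed overhead."""
    return coverage * (1.0 - 1.0 / (rate * rate))

print(vrs_savings(1.0))  # whole frame at 2x2 -> 0.75 saved (theoretical ceiling)
print(vrs_savings(0.2))  # 20% of the frame at 2x2 -> ~0.15 saved
```

This is why whole-frame techniques like checkerboarding show big, constant savings, while VRS measured over a benchmark run can look like only a few percent even though it delivers much more in exactly the scenes the developer targeted.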

That said, just like checkerboard rendering (or many other cost-saving techniques), better implementations by better developers will result in less artifacting. Lazy implementations, or even worse, automatic implementations, will result in more artifacting. Still, a bad or even average checkerboard rendering implementation will have noticeable artifacts across the entire screen on every frame. A bad or average VRS implementation will affect only part of a frame, and may not even be used on all frames.

Regards,
SB
 
GPU performance in Halo, in my opinion, is just plain bad on Nvidia cards. Load up a small arena map in the Academy and an RTX 3080 won't be able to maintain 120 fps on the high preset. Watching the video, the open-world performance on mid-range CPUs looks horrible. That's an automatic skip for me if you can't bring it up significantly higher with low settings. The game really doesn't look that nice for the performance. I'll take across-the-board low settings for better framerates any day, but that'll probably be a lot more compromising than in multiplayer.
 
I think it's worth remembering that if you're not pixel shader bound VRS ain't getting you anything.
yes and no =P.

If you use VRS, you do less work and your shader time drops. Assuming you don't have a hard bottleneck somewhere else in the pipeline, this should reduce frame time.
 
yes and no =P.
Yes, I have to agree. It's one of those things that works one way in theory on paper, but in reality it's different when you measure.
What's really gonna improve frame times is when they track our eyeballs, see what we're looking at, and chuck RT, GI, supersampling, etc. (i.e. the works) at that 5% of the screen, while the other 95% just does PS2-level rendering.
 
yes and no =P.

If you use VRS, you do less work and your shader time drops. Assuming you don't have a hard bottleneck somewhere else in the pipeline, this should reduce frame time.

But VRS only works and improves performance when you're pixel shader bound.

So you only do less work when you're reducing the load through VRS.
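The "pixel shader bound" condition can be made concrete with a toy model (my simplification, not from the thread): treat frame time as gated by the slowest stage. Cutting pixel-shader time only shortens the frame when the pixel shader was the longest stage to begin with.

```python
def frame_time(vertex_ms, pixel_ms, other_ms):
    """Toy serial-bottleneck model: the frame takes as long as its
    slowest stage. Real GPUs overlap stages, so this is only a sketch."""
    return max(vertex_ms, pixel_ms, other_ms)

# Pixel-shader bound: halving pixel work via VRS shortens the frame.
print(frame_time(4.0, 12.0, 6.0))  # 12.0 ms before VRS
print(frame_time(4.0, 6.0, 6.0))   # 6.0 ms after

# Bandwidth/geometry bound: the same VRS saving changes nothing.
print(frame_time(4.0, 5.0, 10.0))  # 10.0 ms before VRS
print(frame_time(4.0, 2.5, 10.0))  # still 10.0 ms after
```

Note the corollary raised later in the thread: VRS can shave pixel cost so far that the bottleneck moves to another stage, at which point further shading-rate reduction buys nothing.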
 
I think it's worth remembering that if you're not pixel shader bound VRS ain't getting you anything.
GPU compute is used for more than just shading pixels. If you can claw back some compute time by shading less pixels, you can (possibly) use that compute to do other tasks.
 
GPU compute is used for more than just shading pixels. If you can claw back some compute time by shading less pixels, you can (possibly) use that compute to do other tasks.

Isn’t VRS a rasterization only thing? Compute shaders don’t know what “pixels” are.
 
But VRS only works and improves performance when you're pixel shader bound.

So you only do less work when you're reducing the load through VRS.
Describe being 'pixel shader bound' with respect to the context in which you understand it? I'm not sure if we are aligned or communicating the same thing.

https://devblogs.microsoft.com/directx/gears-vrs-tier2/

You can see the VRS savings in the table below for each phase.
N4zLct2.jpg
 
Describe being 'pixel shader bound' with respect to the context in which you understand it? I'm not sure if we are aligned or communicating the same thing.

https://devblogs.microsoft.com/directx/gears-vrs-tier2/

You can see the VRS savings in the table below for each phase.

Also from that same blog:
Is it worth implementing Tier 2 VRS for my game?
Every engine is different and not all games will benefit equally from VRS. There are 2 things to keep in mind when evaluating VRS:

  1. VRS is an optimization that reduces the amount of pixel shader invocations. As such, it will only see improvement on games that are GPU bound due to pixel shader work.
 
Also from that same blog:
It's weird; for some reason when I read your comment, I had an idea you were referring to this, but at the same time I thought you were referring to something completely different.

When I read your OP, it sounded like it wasn't going to be used a lot, but in reality you didn't say anything wrong at all. In my mind, VRS could effectively shave down the shading rate so much that it moves the bottleneck away from pixel shaders to another part of the GPU. And I guess I took it for granted that the majority of the time most games are pixel-shader heavy (and therefore bound), except in the off case where you hit ROP/bandwidth limits.
 
GPU performance in Halo, in my opinion, is just plain bad on Nvidia cards. Load up a small arena map in the Academy and an RTX 3080 won't be able to maintain 120 fps on the high preset. Watching the video, the open-world performance on mid-range CPUs looks horrible. That's an automatic skip for me if you can't bring it up significantly higher with low settings. The game really doesn't look that nice for the performance. I'll take across-the-board low settings for better framerates any day, but that'll probably be a lot more compromising than in multiplayer.

A 1080 performing worse isn't really all that surprising, considering how badly it's running on non-AMD products.
 
This is a 30-second capture of Halo Infinite in a training room version of the Fragmentation map with no bots. Everything set to low or off at native 1440p (except textures, which are ultra). RTX 3080, Ryzen 3600X, 32 GB DDR4 dual-rank 3600 MHz CL14.

Can you identify exactly when I'm pressing the W key to walk forward? Lol, this game is so f'ed up.

upload_2021-12-7_16-20-11.png

And here with textures set to low. Different spot on the map, so you can't compare frametimes directly, but the symptoms are the same. Keep in mind, I'm pressing the W key for about a second or two, so walking maybe 5 feet in game. Something is wrong with the streaming engine. Installed on my Samsung 970 EVO Plus NVMe.

upload_2021-12-7_16-27-18.png

And here is 51% render scale, everything low/off including textures. Same problem. About 235 fps standing still; press the W key to walk forward and get a 10-20 fps drop, even if you press the key for just 1 second.

upload_2021-12-7_16-34-7.png
 
Here I am pressing the W key with @Dictator 's optimized settings at 1440p. I am literally just walking in a straight line, pressing the W key for a second just to see the frames drop. My advice for really consistent performance in Halo is to just never, ever move your character. Just stand still and enjoy the view.

upload_2021-12-7_16-47-13.png

Standing still, moving my mouse to look in random directions with @Dictator 's settings. Native 1440p.

upload_2021-12-7_16-51-13.png

Here's a small map, "Streets", with all settings low/off including textures at 1440p. Horrible frame spikes any time I move. This was done in an enclosed hallway behind the fountain, where the C capture point would be. This is a CPU problem, because my GPU utilization is only 85-90% in this area of the map. I believe it's streaming as needed, or streaming very small tiles of the map, and it's waiting on the CPU or disk to stream data.

upload_2021-12-7_17-8-48.png
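For anyone wanting to quantify spikes like the ones in these screenshots rather than eyeballing the graph, a hypothetical helper (my own sketch, made-up trace values): flag any frame whose time jumps well above the run's median baseline.

```python
def find_spikes(frametimes_ms, factor=1.5):
    """Return indices of frames whose time exceeds `factor` times the
    median frametime. Crude, but enough to count 'press W' hitches."""
    baseline = sorted(frametimes_ms)[len(frametimes_ms) // 2]  # median
    return [i for i, t in enumerate(frametimes_ms) if t > factor * baseline]

# Made-up ~235 fps trace with a two-frame hitch when movement starts.
trace = [4.2, 4.3, 4.2, 9.8, 11.5, 4.4, 4.2]
print(find_spikes(trace))  # frames 3 and 4 stand out
```

Counting spikes per minute while standing still versus while walking would make the "only happens when I move" claim directly measurable.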
 
Is that capture tool using the GPU to do any kind of encoding? Asking because I remember a few streamers that had very odd hitching with Deathloop and it was tracked back to having Nvidia GPU encoding enabled. Once they stopped using that, it became significantly smoother.
 
Is that capture tool using the GPU to do any kind of encoding? Asking because I remember a few streamers that had very odd hitching with Deathloop and it was tracked back to having Nvidia GPU encoding enabled. Once they stopped using that, it became significantly smoother.

No, it's using PresentMon to capture frametime statistics, plus a custom hardware monitoring library. It's not capturing any video or encoding.

Edit: I incorrectly attributed the frametime capture to RTSS.
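For reference, pulling frametimes out of a PresentMon-style CSV log is straightforward; the column name below (MsBetweenPresents) is what I believe PresentMon emits, but check the header of your version's output.

```python
import csv
import io

# Stand-in for a real PresentMon CSV file; only the relevant columns shown.
sample = io.StringIO(
    "Application,MsBetweenPresents\n"
    "HaloInfinite.exe,16.6\n"
    "HaloInfinite.exe,33.9\n"
)

frametimes = [float(row["MsBetweenPresents"]) for row in csv.DictReader(sample)]
print(frametimes)  # [16.6, 33.9]
```

From there you can feed the list into any percentile or spike analysis instead of relying on the overlay graph.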
 