Digital Foundry Article Technical Discussion [2023]

Status: Not open for further replies.
I have set the visibility and level-of-detail settings to the maximum. I hardly notice any pop-in on foot so far, but I'll have to take a closer look.

It was similar with The Division 1 and 2.
 
Any per-vertex/per-primitive attributes are only defined for the mesh shader's output, as clearly laid out in the specifications. If you look at the DispatchMesh API intrinsic, there's nothing in there that states that the payload MUST have some special geometry data/layout. The only hard limitation is the amount of data the payload can contain. There's another variant of the function, without task shaders, where you don't have to specify a payload at all!
I no longer understand what you are arguing with me about here. One way or another, the rasterizer must know the number of primitives and vertices in a meshlet and the primitive type, since rasterizers only support triangles, lines, and points. This must either be encoded in the data layout of the meshlet itself or calculated at runtime; there are no other options for compatibility with the rasterizers. To be compatible with the rasterizers, the meshlet has to resemble the internal inter-stage buffer formats, which can't be arbitrary, period. Yet that says nothing about compatibility with other FP blocks, which is the thing I was arguing with you about.
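To make the point concrete, here is a toy sketch of a meshlet descriptor in Python. This is an illustration only, loosely modeled on common layouts such as meshoptimizer's `meshopt_Meshlet`; nothing here is mandated by any spec, and the 64-vertex/126-primitive limits are just typical values. The point is simply that the rasterizer-facing data carries explicit vertex and primitive counts and a known primitive type (triangles here):

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical meshlet descriptor, loosely modeled on common layouts
# (e.g. meshoptimizer's meshopt_Meshlet). Nothing here is spec-mandated;
# it just illustrates that rasterizer-facing meshlet data must carry
# explicit vertex/primitive counts and a known primitive type.

@dataclass
class Meshlet:
    vertex_indices: List[int]              # indices into the mesh's vertex buffer
    triangles: List[Tuple[int, int, int]]  # local indices into vertex_indices

    @property
    def vertex_count(self) -> int:
        return len(self.vertex_indices)

    @property
    def primitive_count(self) -> int:
        return len(self.triangles)

def validate(meshlet: Meshlet, max_verts: int = 64, max_prims: int = 126) -> bool:
    """Check the hardware-style limits a mesh shader's output must respect."""
    if meshlet.vertex_count > max_verts or meshlet.primitive_count > max_prims:
        return False
    # every local index must reference a vertex actually present in the meshlet
    return all(0 <= i < meshlet.vertex_count
               for tri in meshlet.triangles for i in tri)

m = Meshlet(vertex_indices=[10, 11, 12, 13],
            triangles=[(0, 1, 2), (0, 2, 3)])
print(m.vertex_count, m.primitive_count, validate(m))  # 4 2 True
```

Whether these counts live in a precomputed buffer like this or are derived at runtime is up to the engine, but either way the rasterizer-facing output ends up in this kind of constrained shape.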

A combination of factors such as the variable output nature of geometry shaders made it difficult for hardware implementations to be efficient and features in the original legacy geometry pipeline such as hardware tessellation and stream out didn't catch on so the idea of mesh shading is a "reset/redo button" of sorts so that hardware vendors don't have to implement them ...
Given your line of argument, I can argue that tessellation and higher geometry density didn't "catch on" early in the generation due to subpar hardware implementations, which suffered from spillover into video memory at higher expansion rates and generally delivered low performance across many generations of hardware. The irony is that we still have mesh shaders on some vendors falling short of expectations in virtually every benchmark I've seen so far, possibly because they still have to make a round trip through memory for the expansion, or for other reasons. Maybe history is repeating itself, and subpar hardware implementations by some vendors are the reason behind the low adoption of mesh shaders so far.

The difference between graphics shaders and compute shaders is that they each operate in their own unique hardware pipelines, so there's no way for them to directly interface with each other in most cases.
Thanks for explaining the commonly known stuff I never asked you about. However, I wish you had instead provided proof that tessellation stages can be accessed within the mesh shaders on AMD's hardware to confirm your theories.
 
Also interesting to see the comparison between HW and SW RT and the huge difference that can make in scenes with more extensive RT use.

The software RT implementation is impressive. Hopefully they share more details at GDC etc. Would be interesting to compare with UE5’s SDF tracing.

Avatar’s built-in benchmark produces a lot of interesting data, including runtimes for individual passes in each frame (depth, G-buffer, RT, etc.). I was hoping DF would dive into that, but alas.
 
The software RT implementation is impressive. Hopefully they share more details at GDC etc. Would be interesting to compare with UE5’s SDF tracing.
However, I would have preferred if their hardware implementation were impressive. Turning just the RT reflections from the Ultra preset to Unobtanium causes performance to drop from 8 ms to 22 ms for reflections alone. Like how?
They just have a color per object (Metro style) in the BVH; objects don't feature textures or any complex materials, and have very simplified geometry. There are no multiple bounces for lighting in the reflections, and the roughness cut-off is nothing special. Yet this is comparable to the performance cost of path tracing at maximum settings in Alan Wake 2. I wonder where all these milliseconds are spent, as the game is pretty basic in the RT department. I've also seen all sorts of light leaking and other RT bugs, which will hopefully be addressed in upcoming patches.
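For a sense of scale, a quick back-of-the-envelope calculation shows what that 8 ms to 22 ms jump does to frame rate. The 60 fps baseline is an assumption for illustration, and the rest of the frame is assumed to cost the same under both presets:

```python
# Toy frame-budget arithmetic for the 8 ms -> 22 ms reflection cost quoted
# above. Assumptions: a 60 fps baseline on the Ultra preset, and that all
# non-reflection work is unchanged between presets.

def fps(frame_ms: float) -> float:
    return 1000.0 / frame_ms

baseline_frame_ms = 1000.0 / 60.0          # ~16.7 ms total at 60 fps
other_work_ms = baseline_frame_ms - 8.0    # everything except reflections

ultra_ms = other_work_ms + 8.0             # Ultra reflections (8 ms)
unobtanium_ms = other_work_ms + 22.0       # Unobtanium reflections (22 ms)

print(f"Ultra:      {ultra_ms:.1f} ms -> {fps(ultra_ms):.1f} fps")
print(f"Unobtanium: {unobtanium_ms:.1f} ms -> {fps(unobtanium_ms):.1f} fps")
```

Under those assumptions, the single setting change drags a 60 fps frame down to roughly 33 fps, which is why the cost stands out so much.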
 
However, I would have preferred if their hardware implementation were impressive. Turning just the RT reflections from the Ultra preset to Unobtanium causes performance to drop from 8 ms to 22 ms for reflections alone. Like how?
They just have a color per object (Metro style) in the BVH; objects don't feature textures or any complex materials, and have very simplified geometry. There are no multiple bounces for lighting in the reflections, and the roughness cut-off is nothing special. Yet this is comparable to the performance cost of path tracing at maximum settings in Alan Wake 2. I wonder where all these milliseconds are spent, as the game is pretty basic in the RT department. I've also seen all sorts of light leaking and other RT bugs, which will hopefully be addressed in upcoming patches.

Could it have anything to do with it using DXR1.1 which I understand is suboptimal on Nvidia architectures (while being optimal for RDNA2)?
 
Could it have anything to do with it using DXR1.1 which I understand is suboptimal on Nvidia architectures (while being optimal for RDNA2)?
NVIDIA's RT subsystems are equally optimal for both DXR 1.0 and DXR 1.1; for Intel the only optimal path is DXR 1.0, and for AMD the only optimal path is DXR 1.1. NVIDIA is the only vendor to handle both optimally.
 
Could it have anything to do with it using DXR1.1 which I understand is suboptimal on Nvidia architectures (while being optimal for RDNA2)?
Inline RT is completely fine on NVIDIA's architectures. I guess a bit of profiling would have been helpful here, but I'm too lazy to spend my vacation on this.
 
Could it have anything to do with it using DXR1.1 which I understand is suboptimal on Nvidia architectures (while being optimal for RDNA2)?

DXR 1.1 is fine if used properly. The benefits of DXR 1.0 come into play if you need the driver to handle shader scheduling for you, or if you want to use proprietary features like Nvidia’s SER.
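As a rough analogue of what "inline" means here: with DXR 1.1's RayQuery, the calling shader drives the traversal loop itself and consumes hits inline, rather than the driver scheduling separate closest-hit/miss shaders as in the DXR 1.0 pipeline. The sketch below is a Python toy, not HLSL; the scene is a flat list of boxes instead of a real BVH, and `trace_inline` only loosely mimics the `RayQuery.Proceed()` loop:

```python
# Toy analogue of DXR 1.1 inline ray tracing (the HLSL RayQuery pattern),
# written in Python purely for illustration. The point: the "shader" itself
# loops over candidate geometry and commits the closest hit inline, instead
# of handing control to driver-scheduled hit/miss shaders (the DXR 1.0 model).

from typing import List, Optional, Tuple

AABB = Tuple[Tuple[float, float, float], Tuple[float, float, float]]  # (min, max)

def ray_aabb_t(origin, direction, box: AABB) -> Optional[float]:
    """Slab test: entry distance t along the ray, or None on a miss."""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box[0], box[1]):
        if abs(d) < 1e-9:
            if o < lo or o > hi:
                return None       # ray parallel to slab and outside it
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
        if tmin > tmax:
            return None
    return tmin

def trace_inline(origin, direction, scene: List[AABB]) -> Optional[int]:
    """The caller drives traversal itself -- the RayQuery idea in miniature."""
    closest_t, closest_i = float("inf"), None
    for i, box in enumerate(scene):          # stand-in for rq.Proceed()
        t = ray_aabb_t(origin, direction, box)
        if t is not None and t < closest_t:  # commit the nearer hit
            closest_t, closest_i = t, i
    return closest_i

scene = [((2, -1, -1), (3, 1, 1)), ((5, -1, -1), (6, 1, 1))]
hit = trace_inline((0, 0, 0), (1, 0, 0), scene)
print("hit box:", hit)  # hit box: 0 (the nearer box along +x)
```

Because the whole loop lives in one shader, the hardware/driver gets no say in rescheduling divergent hit work, which is exactly where DXR 1.0's separate shader tables (and extensions like SER) can help.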
 
It seems like a case of the RT just being super light. The GI seems pretty barebones and not a huge improvement over probe GI.
 
I know probe GI well enough from games, and in those games I never saw lighting quality like Avatar's. Old GI looks much worse and much less accurate than the RT GI in Avatar. Probe GI looks old-fashioned, while the lighting in Avatar looks like something from a new generation. Avatar's RT GI is light, but not cheap. You've seen how much faster the RT hardware is.
 
I know probe GI well enough from games, and in those games I never saw lighting quality like Avatar's. Old GI looks much worse and much less accurate than the RT GI in Avatar. Probe GI looks old-fashioned, while the lighting in Avatar looks like something from a new generation. Avatar's RT GI is light, but not cheap. You've seen how much faster the RT hardware is.
Avatar is better, just not substantially so IMO. Metro does GI far better as an example.
 
Avatar is better, just not substantially so IMO. Metro does GI far better as an example.

What specifically is Metro doing better?

It's also important to remember that Metro is a substantially worse-looking game that screams "last gen", and it wasn't exactly a looker when it released.

I'm sure its sparse environments, relative to Avatar's, are easier to light with GI.
 
What specifically is Metro doing better?

It's also important to remember that Metro is a substantially worse-looking game that screams "last gen", and it wasn't exactly a looker when it released.

I'm sure its sparse environments, relative to Avatar's, are easier to light with GI.
Avatar has a lack of consistency, with areas more often looking flat and improperly lit, and light bounce not being quite correct.
 
As is DF tradition, we round off the year with a look at the team's personal picks for the best game graphics of 2023. Covering off the very best in both PC and console rendering, Alex Battaglia, John Linneman and Oliver Mackenzie share their thoughts on the most impressive visuals of the year, leading to an open debate on which title did it best: Alan Wake 2, Avatar: Frontiers of Pandora or Cyberpunk 2077's RT Overdrive?

0:00:00 Overview
0:01:55 Honourable Mentions
0:16:14 Star Wars Jedi: Survivor
0:21:28 Final Fantasy 16
0:27:10 Robocop: Rogue City
0:32:56 Resident Evil 4
0:40:33 Hi-Fi Rush
0:45:45 Super Mario Bros. Wonder
0:49:45 Marvel’s Spider-Man 2
0:55:50 Cyberpunk 2077 RT Overdrive
1:02:57 Alan Wake 2
1:12:29 Avatar: Frontiers of Pandora
1:22:43 Top Three Debate
 
As I said before, these games were also clearly my top 3. I find it exciting to see and discuss these three games because they push technology and visuals forward so much.

But I would have put Cyberpunk 2077 in 1st place. For 2nd place, I would have difficulty deciding between Avatar and Alan Wake 2.
Alan Wake 2 comes closest to the look of real feature films or real life and has almost no visual flaws, but Avatar is more ambitious, bigger, runs faster, and also looks very good on the consoles. Avatar has a more game-like look but also more wow moments. It could be more ambitious when it comes to lighting: the huge moon is not an area light but only a point light. As a result, the flying rocks sometimes block all the direct light, which wouldn't be the case in Cyberpunk 2077. A path-traced Avatar would be too good.

I liked what Oliver said about the consistent lighting in Cyberpunk 2077: all those annoying artifacts are just gone. Almost perfection. The only problem, which many games have too, is that structures in the opposite direction are sometimes faded out, resulting in shadow pop-in.

It's also true that the GI in Dead Space isn't that good; I find the contrasts too strong. The Callisto Protocol is much better lit.

From my point of view, Guerrilla Games should make another Killzone.

Avatar is better, just not substantially so IMO. Metro does GI far better as an example.
It's been too long since I dealt with Metro to say anything about it.

If other games also had Avatar's RT GI, I'd be very happy. I'd play The Division 2 again in a heartbeat if it had such GI, which I won't do much of with its current GI, though.
 
Glad to see Hi-Fi Rush on the list. Might be the prettiest cel-shaded game ever and they absolutely nailed that Saturday morning cartoon look.

Top 3 was pretty obvious. Wasn't expecting to see Ubisoft make such a late push but they've always had very talented teams when it comes to graphics technology. If only their game designers and writers were up to par, they'd have games rivaling ND and CDPR.
 
Glad to see Hi-Fi Rush on the list. Might be the prettiest cel-shaded game ever and they absolutely nailed that Saturday morning cartoon look

It's not the first game to do it, but the pulsing of background scenery to the music was beautifully kinetic as well.
 
Avatar has scenes that just look weird to me. A lot of the scenes with fog at night have raised blacks, so all of the contrast gets lost. I'm actually not a person who likes super-high contrast (I played the Destiny 2 demo and it turned me off the game), but I think it's part of the reason why this game looks very flat. It's too low-contrast.

[screenshot: 1703275161850.png]

There's also stuff like this, which I think just looks bad. It looks wrong somehow; the rock faces look flat. Without the jungle in the distant background, I could be convinced this was from a very old game.

[screenshot: 1703275286965.png]

But then you get a ton of stuff that looks like this which looks fantastic. If you told me this screenshot was from a much newer and totally different game, I'd believe you.

[screenshot: 1703275456102.png]

Edit: I'll add that I haven't played the game, so I'm only going by what's provided. I've never seen it in HDR, so the presentation of this stuff could look much different and better. There's a lot of variables. Overall Avatar is super technically impressive. I'd never doubt that. I just see certain scenes that I don't think look good, which can be as much about art direction as technology.

Edit: I'll also add that I'm viewing these images on a VA panel with something like a 2400:1 contrast ratio, which is better than the vast majority of TN or IPS monitors, but not as good as a VA television, and nowhere close to an OLED. Looking at these images on an OLED (which I think most of the DF guys use) might give enough contrast that they look even better. My display is basically spot-on sRGB, so even viewing with a flat 2.2 gamma would, I think, add a bit more contrast. There's a lot going on when we look at the same images.
 