Current Generation Games Analysis Technical Discussion [2023] [XBSX|S, PS5, PC]

Status
Not open for further replies.
I checked this in the diner area. It was the GI option that produced shadows for all the items on the walls. Can't say anything about the other scenes, as I checked them with RT turned on/off only, and I am currently playing the Alan part of the game. However, it would be clever to approximate GI by using area lights in place of windows (considering the window in GI essentially functions as the same area light). This would result in similar visuals, yet it would be of higher quality with fewer samples and less noise.
The Oceanview Hotel is a great level to showcase the differences in the GI solutions:

Pathtracing@High without direct light.

Like Control, Remedy designed this game with a vision in mind rather than around the underlying engine. Here is a great example of how even the lowest path tracing setting can massively improve image quality over rasterization while only costing 20% on Lovelace - spoiler for people who want to experience this scene in the game:
 
Also, you're confusing direct and indirect lighting. Direct lighting would be under the direct sun rays, outside of the windows in the screenshots, while the indirect lighting is the light that has bounced off other surfaces outside and then enters the room through the windows. There are no other light sources besides the windows in the spots where I took these screenshots.
I checked this in the diner area. It was the GI option that produced shadows for all the items on the walls. Can't say anything about the other scenes, as I checked them with RT turned on/off only, and I am currently playing the Alan part of the game. However, it would be clever to approximate GI by using area lights in place of windows (considering the window in GI essentially functions as the same area light). This would result in similar visuals, yet it would be of higher quality with fewer samples and less noise.
Take a look at the morgue from your screenshots. Everything is clearly visible on that electrical box on the wall, where the PT version adds indirect shadows.

First screenshot without any ray tracing. From this angle that shadow is visible, but barely. Direct sunlight is clearly visible on the right, so this shadow isn't cast by the sun. The indirect character shadow points in the same direction. Is it an area light placed here to simulate indirect shadows?
NoRT.jpg

Next screenshot with ray traced direct lighting only. Shadows are now higher quality and well defined.
DirectLighting.jpg

Finally, direct lighting and PT together. It's hard to spot a difference, but it fills a few gaps between the electrical box and the wall, removing some of the indirect light there, as if it's filling gaps in the baked data.
DirectLightingAndPT.jpg

All screenshots with a slider:
 
Is it an area light placed here to simulate indirect shadows?
Likely, as RTXDI works only with light sources, and the sun is in a position that would produce shadows in a different direction.

I've made some further progress in the game and decided to try a different methodology to separate the RT GI from the prebaked GI.
For this, I tuned the textures down to the lowest possible resolution via the texture LOD bias in NVIDIA Inspector, since removing the high-frequency texture detail makes it easier to see how GI contributes to the image.
I've also disabled SSAA for the RT Off screens so that you can see the RT GI contribution, and captured RT Direct Illumination on its own, which helps isolate the RT GI from the RT DI contribution.

Results are here: Alan Wake 2 RT GI contribution

It is now evident that diffuse RT GI covers large areas and contributes to lighting and shadowing from large objects.
I would conclude that its diffuse GI rays should be cast at least several meters away to capture all the details, so I wouldn't categorize such GI as local only.
It's still added on top of the prebaked GI, so it can't completely fix all the light leaks caused by the low resolution of the prebaked GI.
Glossy GI is quite long range and captures very distant objects.
 
It is now evident that diffuse RT GI covers large areas and contributes to lighting and shadowing from large objects.
I would conclude that its diffuse GI rays should be cast at least several meters away to capture all the details, so I wouldn't categorize such GI as local only.
It's still added on top of the prebaked GI, so it can't completely fix all the light leaks caused by the low resolution of the prebaked GI.
Glossy GI is quite long range and captures very distant objects.
If PT GI were simply added or overlaid on top of the prebaked GI, then it wouldn't be possible for it to make crevices completely black. Here's a good example from your screenshots:
1699452919102.png
Another clue is that the PT GI range is larger for dynamic objects and outdoors, which traditionally have lower-resolution baked data. It's also hard to find places where PT GI adds light without glossy reflections; instead, it usually only occludes the baked GI. Hence my theory that it's a local GI, or a local occlusion term for the baked GI data.
 
The differences here are primarily specular, right?
The differences are a mix of specular and diffuse GI.
You can easily spot diffuse GI in places where large texels of shadows are visible with RT Off, since all diffuse GI lightmaps, or whatever they use in the game, are fairly low-resolution.
I attempted to capture the screenshots in such a way that the specular GI would not eliminate or affect the diffuse, as it does at certain angles.

Definitely curious what heuristic they are using to combine things and whether the RT/PT rays sample the baked lighting at hits.
Remedy loves to share technical details at technical conferences, so they might present details at GDC or Siggraph.
 
Why does Ray Reconstruction reduce flickering in specular highlights so dramatically? When watching videos beforehand I was worried about flickering in Alan Wake 2, but with Ray Reconstruction it's almost gone and the image is very stable. With DLSS alone the game flickers much more.
 
Why does Ray Reconstruction reduce flickering in specular highlights so dramatically? When watching videos beforehand I was worried about flickering in Alan Wake 2, but with Ray Reconstruction it's almost gone and the image is very stable. With DLSS alone the game flickers much more.

It's part of what "ray reconstruction" does to begin with. They're trying to apply shading from the new frame to older upscaled frame data. That is, DLSS and similar upscalers render at 1080p, accumulate 4+ of those frames, and output at 4K. Instead of just holding onto old frames and guessing a little at what they might look like now, I'm guessing they use the newest frame's 1080p depth, normals, etc. to estimate what 4K depth and normals would look like, then, based on the current 1080p shading, adjust the old frames' colors to match the updated lighting.

This is a guess as Nvidia hasn't specified what "ray reconstruction" does exactly yet, but it's a solid guess.
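For what it's worth, one standard ingredient in this kind of temporal accumulation is clamping the reprojected history to the range of the current frame's local neighborhood before blending, so a stale highlight can't persist and flicker. A minimal 1D sketch (the function name, neighborhood size, and blend factor are illustrative, not DLSS's actual pipeline):

```python
def temporal_resolve(history, current, alpha=0.1):
    """Blend reprojected history with the current frame, after clamping
    each history value to the min/max of the current frame's 3-wide
    neighborhood (a common flicker-suppression trick in TAA-style
    accumulation; all names and parameters here are illustrative)."""
    out = []
    for i, h in enumerate(history):
        neighborhood = current[max(i - 1, 0):i + 2]
        lo, hi = min(neighborhood), max(neighborhood)
        h = min(max(h, lo), hi)                   # reject stale/outlier history
        out.append(h + alpha * (current[i] - h))  # exponential blend
    return out

# A stale bright highlight (10.0) over a now-dark pixel gets clamped into
# the current frame's range, so it cannot keep flashing across frames.
resolved = temporal_resolve([10.0, 0.0], [0.0, 1.0])
```

The trade-off is that aggressive clamping also throws away valid history, which reintroduces noise; a learned resolve like Ray Reconstruction can presumably be less brutal about when to discard history, which would fit the observed stability.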
 

Larian has made progress on reducing memory usage in Baldur's Gate 3 for Series S, which they confirm has been the holdup.
From around 8850 MB to 7000 MB of total memory used on XSS, most of it CPU RAM (which was expected in such a game). They're not even talking about split-screen mode here, which likely will never happen. 8850 MB means the game wouldn't even run on a retail console; devkits have more memory available, so it can obviously run there while using more.
 
Why does Ray Reconstruction reduce flickering in specular highlights so dramatically? When watching videos beforehand I was worried about flickering in Alan Wake 2, but with Ray Reconstruction it's almost gone and the image is very stable. With DLSS alone the game flickers much more.
@RobertR1 posted a mod in the Cyberpunk thread that had some info in the release notes about RR, which might shed a little light on why this is better with RR. To put it crudely, the denoiser used with RR is better than the ones used without it. I'll put the links here instead of quoting too much.


The NVIDIA docs he links to:


NVIDIA Real-Time Denoisers (NRD) is a spatio-temporal, API-agnostic denoising library. The library has been designed to work with low-rpp (rays per pixel) signals. NRD is a fast solution that only slightly depends on input signals and environment conditions.

NRD includes the following denoisers:
  • REBLUR - recurrent blur based denoiser
  • RELAX - SVGF-based denoiser that clamps to a fast history to minimize temporal lag; designed for RTXDI (RTX Direct Illumination). It uses 30% more memory and is 20% slower than REBLUR
  • SIGMA - shadow-only denoiser
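The "clamping to fast history" idea behind RELAX can be sketched in scalar form: keep two exponential histories per pixel, a noisy fast one that reacts quickly and a smooth slow one that is clamped toward it. This is a toy version only; the function name and all constants are illustrative, not NRD's actual parameters:

```python
def resolve(slow, fast, current, a_slow=0.05, a_fast=0.5, k=1.0):
    """One temporal accumulation step. The fast history (high blend
    weight) tracks lighting changes quickly; the slow history (low blend
    weight, low noise) is clamped to stay within k of the fast history,
    trading a little noise for much less temporal lag.
    All constants here are illustrative, not NRD's real values."""
    fast = fast + a_fast * (current - fast)   # fast exponential average
    slow = slow + a_slow * (current - slow)   # slow exponential average
    slow = min(max(slow, fast - k), fast + k) # clamp slow to fast history
    return slow, fast

# After a sudden jump from 0 to 10, the unclamped slow history would only
# reach 0.5 in one step; the clamp drags it to within k of the fast history.
slow, fast = resolve(0.0, 0.0, 10.0)
```

The slow history alone would lag badly behind a lighting change (exactly the kind of lag that reads as ghosting or flicker); the clamp is what lets it keep up without inheriting all of the fast history's noise.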
 
Why does Ray Reconstruction reduce flickering in specular highlights so dramatically? When watching videos beforehand I was worried about flickering in Alan Wake 2, but with Ray Reconstruction it's almost gone and the image is very stable. With DLSS alone the game flickers much more.

nvidia jesus talked about it here, somewhere in this video :eek:
 
From around 8850 MB to 7000 MB of total memory used on XSS, most of it CPU RAM (which was expected in such a game). They're not even talking about split-screen mode here, which likely will never happen. 8850 MB means the game wouldn't even run on a retail console; devkits have more memory available, so it can obviously run there while using more.
I’m not seeing the 8850?
They dropped VRAM usage by 1.5 GB:
5530 to 4000,
and now it's 4000 to 2500.

They still have a long way to go. BG3 launched with only DX11 and Vulkan; this is their DX12 code base, which Xbox will require for good performance.
 
I’m not seeing the 8850?
They dropped VRAM usage by 1.5 GB:
5530 to 4000,
and now it's 4000 to 2500.

They still have a long way to go. BG3 launched with only DX11 and Vulkan; this is their DX12 code base, which Xbox will require for good performance.
I thought the first pic was CPU RAM, the other VRAM.
 
I thought the first pic was CPU RAM, the other VRAM.
It's a before and after.

There’s only 1 memory pool on consoles.

Edit: though looking back, that doesn't make sense, because the date is on the X axis and Y is the average memory usage on that date.
So the far right is the latest optimization.

Chart 1: RAM
Chart 2: VRAM allocated -- but I'm only saying this because the top left says Top RAM and, I think, Top VRAM.
Not really sure whether RAM is total or CPU-only.

Regardless, it's 4.7 GB + 2.3 GB = 7 GB.
 
Those flickering shadows are minor differences, not major ones. Most of the time, the differences are only visible in cherry-picked scenes. And they are certainly not worth the heavy performance loss.

View attachment 9972

This is the difference you'll see most of the time, and personally I don't care about flickering shadows. If the overall visuals aren't improved significantly in every scene for THAT performance loss, then it's not worth it. There's an entire video by DF where you can see that the differences between the maxed-out PC and console versions are minor.


Just an absolutely horrible visuals-to-performance ratio that is completely unacceptable. Same as Ultra Performance mode or 30 FPS on a high-end Ampere GPU.

And now compare that to the RTGI found in Metro Exodus Enhanced Edition, which transforms EVERY scene to the point where everything looks completely different, and runs at 1080p 60 FPS on an RTX 2060. This is what I call efficient. (Before you comment: I'm aware Alan Wake 2 is doing a lot more than Metro and is performance-intensive in general; I'm talking about the ray tracing solution here, with the older game without RT and the Enhanced Edition as the comparison points.)


Why do we just accept how Alan Wake 2 performs with ray tracing? They could easily have built in a hybrid ray tracing mode with reflections as well as some hybrid RT shadows, and I guarantee it would look almost the same as the maxed-out path-traced version for a fraction of the performance cost.

And yes, my friend was already using optimized settings and aggressive DLSS presets. 60 FPS was not possible near the end of the game with any sort of Raytracing enabled. The forest from the beginning is not the heaviest scene at all and he played through the entire game.

The game was absolutely not made with RT in mind, and you can clearly see that. It was made for the consoles, and Nvidia later tacked on a path tracing solution that only runs well enough on Lovelace (and even there it struggles a lot).

Normally I'm a huge fan of any sort of Raytracing, but in this game my clear recommendation is to turn everything off. It's clearly not worth it.
100% agree. AW2 represents one of the worst implementations of path tracing so far. The visual difference between the two modes is not "transformative" as some would say, and the performance cost is laughable. In regular gameplay, the differences are not noticeable to a majority of people. Its impact is frankly insignificant. It's one of those things that is done for its own sake, not out of practicality. People are pointing out shadows and I can't help but chuckle. During regular gameplay, i.e. while focusing on the game, it's not even noticeable or noteworthy. It's a bit like VAR in football, where refs debate whether a player's nose was offside or not.

I think devs would do well to remember who their target audience is when balancing features and performance. It's certainly not folks like us on Beyond3D, who represent 0.0001% of the potential purchasing power and 99% of the graphics complaints.
 
I can't wait for people to learn that the reason RT and path tracing are important is not to be transformative to the player; it's to let developers replicate how light works and interacts without having to go to enormous lengths to fake it.
 
We're likely going to reach a crossover point where RT simply makes more sense. Barring some breakthroughs, we will likely need RT (and ML too, if we want to go there) to deliver something truly next-gen on the game development side.

At the same time, I think we need to face the practical reality on the end-user side (even if it's just psychology/optics): the performance cost, combined with the cost of the hardware, does impact how end users feel about it now. And given how invested users can be in the marketing involved, it can be an emotional topic for some.

It's offline rendering, so the performance considerations are of course very different (and it's a different medium without an end-user cost issue), but look at why Pixar went to RT -


 
100% agree. AW2 represents one of the worst implementations of path tracing so far. The visual difference between the two modes is not "transformative" as some would say, and the performance cost is laughable. In regular gameplay, the differences are not noticeable to a majority of people. Its impact is frankly insignificant. It's one of those things that is done for its own sake, not out of practicality. People are pointing out shadows and I can't help but chuckle. During regular gameplay, i.e. while focusing on the game, it's not even noticeable or noteworthy.

You can argue that the cost of those stable shadows isn't worth it at the moment for this particular game, fine, but there's no denying that 'large blocks of pixels blinking in and out of existence' is a particularly egregious graphical defect we've just learned to accept as a limitation of shadow maps.

They don't ruin the entire presentation of AW2, for sure, but it's a longstanding defect that is well overdue to be addressed. Throughout the history of game graphics there have been a multitude of effects that, taken in isolation, are invariably 'minor', and whose solutions (such as soft shadows) were derided on introduction as having an egregious cost relative to their benefits. It's when they become commonplace, and especially when combined with other advanced rendering methods, that we come to accept them as necessary to construct a realistic image as a whole.
 