Upscaling Technology Has Become A Crutch

Why do you think native resolution is a crutch? Just curious... I ask 'cos some of my colleagues who are AMD fans usually say that they don't like DLSS or XeSS at all (I told them about DLAA, but they weren't impressed either). They mention that there's nothing like native resolution, that they just prefer to have hardware that runs games at native resolution, and that's it. Native feels more "pure" to them.
Native is not "native" in many respects, as already mentioned in the thread, so your friends are off base.
 
I just wanted to add that in RT, you don't use shadow passes. You can just iterate through all the lights in the scene from world space, test for light vectors that are blocked from the light sources, and shade the shadows that way. Not only is this the best solution because it eliminates a separate pass, but it is also accurate enough to capture shadows of objects that are very small in the world (e.g. forks, spoons, books, etc.) that shadow maps can't capture.
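As a rough illustration of that idea (CPU-side Python, not shader code; the sphere scene and function names are invented for the example), a hard-shadow test is just one occlusion ray per light, with no shadow-map pass anywhere:

```python
import math

def ray_hits_sphere(origin, direction, max_t, center, radius):
    """True if the ray origin + t*direction (0 < t < max_t) hits the sphere."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # a == 1 because the direction is normalized
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return 1e-4 < t < max_t  # small epsilon avoids self-shadowing

def light_visible(point, light_pos, occluders):
    """Shadow test: cast one ray from the shaded point toward the light."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    return not any(ray_hits_sphere(point, direction, dist, c, r)
                   for c, r in occluders)

# a sphere hangs between the point and an overhead light: shadowed
blocked = light_visible((0.0, 0.0, 0.0), (0.0, 5.0, 0.0),
                        [((0.0, 2.5, 0.0), 0.5)])   # False
# a light off to the side has a clear path: lit
lit = light_visible((0.0, 0.0, 0.0), (5.0, 0.0, 0.0),
                    [((0.0, 2.5, 0.0), 0.5)])        # True
```

The occluder list stands in for a real BVH traversal; the point is that any geometry that intersects the segment casts a shadow, no matter how small it is on screen.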
Yes indeed, ray tracing has a big one-time upfront cost (building the BVH, setting up the rays, etc.), but after that it provides near-free scaling to hundreds of shadows/lights ... we can see that already with MegaLights and RTXDI.

It's also interesting that ray tracing works by tracing both the G-Buffer and the world space. And even though ray tracing relies on geometry data, MSAA still can't work with it, because ray tracing has no fixed pixel grid to work with (which MSAA requires); it just casts rays. MSAA also requires multiple geometry and shading samples for each pixel, which means casting multiple rays per pixel (4x MSAA requires 4 rays per pixel), which compounds the computational cost.
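Back-of-envelope arithmetic for that compounding cost (the resolution and sample counts are illustrative, and this counts primary rays only, ignoring bounce rays):

```python
def rays_per_frame(width, height, msaa_samples, rays_per_sample=1):
    # MSAA over ray tracing multiplies the primary-ray budget
    # by the per-pixel sample count
    return width * height * msaa_samples * rays_per_sample

no_msaa = rays_per_frame(1920, 1080, 1)   # 2,073,600 primary rays
msaa_4x = rays_per_frame(1920, 1080, 4)   # 8,294,400 rays: four times the work
```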
 
If the tradeoff is subpixel shimmering/aliasing or a blurry image and ghosting, then the wrong tradeoff has been made
No, the tradeoff is no anti-aliasing at all. As explained above, Deferred Shading invalidated MSAA; it can't work with modern lighting systems at all. You need TAA for modern lighting to work, and MSAA also can't work with ray tracing. If you don't like TAA, then you settle for anti-aliasing only through supersampling, which costs way too much performance (an even bigger hit than ray tracing).

Or you could stop using Deferred Shading and rely only on Forward Shading; in that case you will have your MSAA, but you will lose a huge chunk of performance when working with multiple rasterized lights (a requirement for modern graphics). The hit to performance will be even bigger than the ray tracing hit.

Another solution is to work with Forward Shading but rely on ray tracing for the lighting/shadowing, but that doesn't work either, because MSAA can't work with ray tracing, and because ray tracing relies on temporal accumulation and temporal denoising to work.
 
No, the tradeoff is no anti-aliasing at all. As explained above, Deferred Shading invalidated MSAA; it can't work with modern lighting systems at all. You need TAA for modern lighting to work, and MSAA also can't work with ray tracing. If you don't like TAA, then you settle for anti-aliasing only through supersampling, which costs way too much performance (an even bigger hit than ray tracing).

Another solution is to work with Forward Shading but rely on ray tracing for the lighting/shadowing, but that doesn't work either, because MSAA can't work with ray tracing, and because ray tracing relies on temporal accumulation and temporal denoising to work.
You absolutely CAN combine deferred + ray tracing with MSAA. There's nothing inherently, theoretically incompatible between these techniques ...
Or you could stop using Deferred Shading and rely only on Forward Shading; in that case you will have your MSAA, but you will lose a huge chunk of performance when working with multiple rasterized lights (a requirement for modern graphics). The hit to performance will be even bigger than the ray tracing hit.
A classic forward shading pipeline might be performance-prohibitive when rendering scenes with high light counts, but the industry improved on that issue by creating new enhancements/extensions like clustered shading, or similar prior art such as AMD's Forward+ pipeline, which employs a light-culling pre-pass where tight bounds are assigned to tiles/froxels to determine what set of lights will contribute to their space ...
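A minimal sketch of that culling pre-pass (Python; screen-space circles against a pixel-tile grid, whereas real implementations cull in clip space against tile frusta/froxels, so the geometry here is a simplification):

```python
def cull_lights_per_tile(lights, screen_w, screen_h, tile_size):
    """Assign each light (screen-space x, y, radius in pixels) to every tile
    its bounding circle overlaps. Returns (tile_x, tile_y) -> light indices."""
    tiles_x = (screen_w + tile_size - 1) // tile_size
    tiles_y = (screen_h + tile_size - 1) // tile_size
    grid = {}
    for i, (lx, ly, radius) in enumerate(lights):
        # clamp the light's tile footprint to the grid
        x0 = max(0, int((lx - radius) // tile_size))
        x1 = min(tiles_x - 1, int((lx + radius) // tile_size))
        y0 = max(0, int((ly - radius) // tile_size))
        y1 = min(tiles_y - 1, int((ly + radius) // tile_size))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                grid.setdefault((tx, ty), []).append(i)
    return grid

# a small corner light and a big central light on a 128x128 screen, 16px tiles
grid = cull_lights_per_tile([(10.0, 10.0, 5.0), (100.0, 100.0, 40.0)],
                            128, 128, 16)
# the small light lands only in tile (0, 0); the big one covers tiles (3..7, 3..7)
```

Each fragment then loops over only its own tile's list instead of every light in the scene, which is the whole win of Forward+.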
 
You absolutely CAN combine deferred + ray tracing with MSAA. There's nothing inherently, theoretically incompatible between these techniques ...

The problem is that traditional MSAA only does AA on triangle edges, but ray tracing can cause aliasing inside triangles (e.g. shadows, reflections, refractions, complex materials, etc.), thus making it ineffective.
 
You absolutely CAN combine deferred + ray tracing with MSAA. There's nothing inherently, theoretically incompatible between these techniques ...
You can theoretically do anything you want, but MSAA with deferred rendering means essentially no anti-aliasing for 90% of the image, plus the destroyed performance.

like clustered shading or similar prior art such as AMD's Forward+ pipeline
Forward+ was an even worse hack that couldn't take off in any serious manner. It complicated the integration of SSR and SSAO, had worse performance than Deferred in scenes with many lights, and struggled greatly with overlapping volumetric lights or shadows; transparencies were a pain as well (even more so than with Deferred). It also struggled far more than Deferred in handling complex shadows, not to mention it needed more memory for storing light lists per tile/cluster, as well as data structures like the light grid.

Worst of all, it requires shaders to compute lighting per fragment using the culled light lists, which can lead to higher per-shader complexity and longer shader compilation times. Performance still degrades when the number of lights affecting a single tile increases significantly, as all affected lights still need to be processed.
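The per-fragment cost described here can be sketched like this (hypothetical Lambert-only shading in Python; the point is just that the loop length equals the tile's culled light count, so a crowded tile pays for every light in it):

```python
import math

def shade_fragment(frag_pos, frag_normal, light_list, lights):
    """Forward+ fragment shading: loop over only the lights culled into this
    fragment's tile. Cost is proportional to len(light_list)."""
    total = 0.0
    for i in light_list:
        pos, intensity = lights[i]
        to_light = [p - f for p, f in zip(pos, frag_pos)]
        dist2 = sum(v * v for v in to_light) or 1e-8
        # Lambert term with inverse-square falloff
        ndotl = max(0.0, sum(n * v for n, v in zip(frag_normal, to_light))
                    / math.sqrt(dist2))
        total += intensity * ndotl / dist2
    return total

# one light directly above an upward-facing fragment
lights = [((0.0, 0.0, 2.0), 4.0)]
lit = shade_fragment((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), [0], lights)  # 1.0
```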
 
You're entitled to your preferences for sure, but others prefer different things. While many effects are at 1/2 or even quarter resolution, I'd rather have the option to increase the resolution of said effects in exchange for disabling TAA.

That would be a nice option to have. At least it will help people understand why games look terrible without some sort of temporal AA. I certainly don’t get how someone could prefer a swimming sparkling mess over a slightly blurry image. Shader aliasing is incredibly distracting and immersion breaking. There’s no comparison.

I’ve been using DSR for many years now and even 8K DSR doesn’t address shader aliasing so I don’t think we can avoid temporal accumulation anytime soon.
 
Forward+ was an even worse hack that couldn't take off in any serious manner. It complicated the integration of SSR and SSAO, had worse performance than Deferred in scenes with many lights, and struggled greatly with overlapping volumetric lights or shadows; transparencies were a pain as well (even more so than with Deferred). It also struggled far more than Deferred in handling complex shadows, not to mention it needed more memory for storing light lists per tile/cluster, as well as data structures like the light grid.
Was? Aren't several of the best-looking games of 2024 (id Tech and the Call of Duty engine(s)) still clustered forward renderers? It has all of the drawbacks you described, but those are hardly dealbreakers, just normal tradeoffs.
 
Was? Aren't several of the best-looking games of 2024 (id Tech and the Call of Duty engine(s)) still clustered forward renderers? It has all of the drawbacks you described, but those are hardly dealbreakers, just normal tradeoffs.

Doom Eternal is definitely unique in that it's fully a forward renderer with no g-buffer. There was a great article a while back that explained how it worked.

It uses a depth pre-pass to, I guess, eliminate overdraw. Because they take the "uber-shader" approach, where all materials are defined in a few shaders, they can dynamically merge geometry into fewer draw calls.
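The overdraw-elimination idea can be sketched in Python (the fragments and shading costs are invented for illustration): pass 1 writes only the nearest depth per pixel, then pass 2 shades only fragments that match it, so hidden fragments never run the expensive shader.

```python
def render_with_depth_prepass(fragments):
    """fragments: list of (pixel, depth, shade_cost) tuples."""
    depth_buffer = {}
    for pixel, depth, _ in fragments:          # pass 1: depth only, cheap
        if depth < depth_buffer.get(pixel, float("inf")):
            depth_buffer[pixel] = depth
    shaded_cost = 0
    for pixel, depth, cost in fragments:       # pass 2: depth-equal test, shade
        if depth == depth_buffer[pixel]:
            shaded_cost += cost
    return shaded_cost

# pixel 0 is covered twice; only the nearer fragment (depth 0.5) gets shaded
cost = render_with_depth_prepass([(0, 1.0, 10), (0, 0.5, 10), (1, 2.0, 10)])  # 20
```

Without the pre-pass, all three fragments would shade (cost 30); with it, the occluded one is skipped.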

Would be interesting to know if Indiana Jones is doing anything differently, because it's presumably a much newer version of the engine and they had to fit DXR ray-tracing in (by default). The Call of Duty games use a visibility buffer, so their approach is quite a bit different than g-buffer or forward rendering, I think.
 
The Call of Duty games use a visibility buffer, so their approach is quite a bit different than g-buffer or forward rendering, I think.
Oh, when did they switch to a new renderer? The last rendering talks and papers I found were from 2021, when they were still using Forward+

edit: ah, I glossed over some details -- unless it's changed, they primarily use f+ for opaque but they do use VB rendering for dense stuff like foliage
-- cool talk!
 
Oh, when did they switch to a new renderer? The last rendering talks and papers I found were from 2021, when they were still using Forward+

edit: ah, I glossed over some details -- unless it's changed, they primarily use f+ for opaque but they do use VB rendering for dense stuff like foliage
-- cool talk!

Thanks for the vid. I was trying to figure out if I had it totally wrong and was looking for info. Turns out we're both right.
 
The problem is that traditional MSAA only does AA on triangle edges, but ray tracing can cause aliasing inside triangles (e.g. shadows, reflections, refractions, complex materials, etc.), thus making it ineffective.
@Bold How is that a 'problem' in and of itself? AFAICS MSAA arguably solves the biggest aliasing problem these days, which is the undersampling of geometric edges ...

If you observe all sorts of arbitrary aliasing with ray tracing, then the only remaining conclusion is that you're undersampling your image. You can apply MSAA to non-geometric features in ray tracing as well ...
Forward+ was an even worse hack that couldn't take off in any serious manner. It complicated the integration of SSR and SSAO, had worse performance than Deferred in scenes with many lights, and struggled greatly with overlapping volumetric lights or shadows; transparencies were a pain as well (even more so than with Deferred). It also struggled far more than Deferred in handling complex shadows, not to mention it needed more memory for storing light lists per tile/cluster, as well as data structures like the light grid.
The concept of a 'hack' involves a solution that's either incomplete or has failure cases, neither of which describes Forward+, and your claim of "couldn't take off in any serious manner" won't hold up for much longer when it's also the default rendering mode in Unity 6 URP. Your declaration of "worse performance" does not make it a universal statement ...

Transparencies are much harder in any deferred renderer without resorting to even more expensive data structures, like needing to encode n layers of geometry in an n-layer G-buffer. Only dithered transparency integrates well with a deferred rendering pipeline, but it doesn't work very well outside of things like glass, e.g. for potentially heterogeneous media that exhibits complex scattering like clouds, smoke, and fire ...
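The dithered (a.k.a. stochastic/hashed) transparency mentioned here can be sketched with a classic 4x4 Bayer threshold pattern (a simplified illustration, not any particular engine's implementation):

```python
# ordered-dither threshold pattern (values 0..15)
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dithered_coverage(x, y, alpha):
    """Instead of blending, keep or discard each fragment by comparing alpha
    against a screen-space threshold. The surviving pixels approximate the
    coverage, and every kept fragment is fully opaque, so it fits a deferred
    G-buffer with no sorting or extra layers."""
    threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
    return alpha > threshold

# over one 4x4 tile, alpha = 0.5 keeps exactly half the pixels
kept = sum(dithered_coverage(x, y, 0.5) for y in range(4) for x in range(4))  # 8
```

The tradeoff is the visible screen-door pattern, which is why it suits hard surfaces like glass better than soft volumetric media.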

Having to maintain a local light list is a far easier ordeal (fast generation/low memory bandwidth consumption) than a G-buffer (high memory bandwidth consumption per render pass), let alone a BVH (slow generation/update iterations), to the point where the technique can be standardized on a mobile phone!
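Rough back-of-envelope numbers for that memory argument (the render-target count, formats, tile size, and per-tile light cap are assumptions for illustration, not measurements of any engine):

```python
def gbuffer_bytes(width, height, render_targets, bytes_per_texel=8):
    # e.g. several RGBA16F attachments at full resolution
    return width * height * render_targets * bytes_per_texel

def light_list_bytes(tiles_x, tiles_y, max_lights_per_tile, bytes_per_index=4):
    # per-tile arrays of 32-bit light indices
    return tiles_x * tiles_y * max_lights_per_tile * bytes_per_index

gb = gbuffer_bytes(1920, 1080, 4)    # 66,355,200 bytes (~63 MiB) of G-buffer
ll = light_list_bytes(120, 68, 64)   # 2,088,960 bytes (~2 MiB) of light lists
                                     # (1080p split into 16x16-pixel tiles)
```

Even with generous per-tile caps, the light-list storage is a rounding error next to a fat full-resolution G-buffer, which is the poster's point about mobile viability.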
Worst of all, it requires shaders to compute lighting per fragment using the culled light lists, which can lead to higher per-shader complexity and longer shader compilation times. Performance still degrades when the number of lights affecting a single tile increases significantly, as all affected lights still need to be processed.
I'm not exactly sure what you mean by "requires shaders to compute lighting per fragment", since using MSAA will in all cases trigger per-sample shading regardless of whether your renderer in question is deferred or forward ...
 
@Bold How is that a 'problem' in and of itself? AFAICS MSAA arguably solves the biggest aliasing problem these days, which is the undersampling of geometric edges ...

If you observe all sorts of arbitrary aliasing with ray tracing, then the only remaining conclusion is that you're undersampling your image. You can apply MSAA to non-geometric features in ray tracing as well ...

As I said, it's no longer effective. Instead of using MSAA for edges and other methods for the internals, people have deemed it's better to just use other methods for everything.
 
claim of "couldn't take off in any serious manner" won't hold up for much longer when it's also the default rendering mode in Unity 6 URP
Unity is a mobile-first platform; as an engine it's behind all other engines in features and performance. I don't see how that qualifies as serious adoption at all.
 
@Bold How is that a 'problem' in and of itself? AFAICS MSAA arguably solves the biggest aliasing problem these days, which is the undersampling of geometric edges ...
I disagree here. Geometric edges are easier to solve than shader aliasing -- especially when importance sampling materials. I can't imagine tolerating a noisy specular highlight from a large area light that's required to shade an entire interior room with no other local light sources.

If you observe all sorts of arbitrary aliasing with ray tracing, then the only remaining conclusion is that you're undersampling your image.
It's not the image that's being undersampled, though. You are undersampling the light sources and the objects in the world that bounce light.

You can apply MSAA on non-geometric features in ray tracing as well ...
How can MSAA be used when sampling light sources and/or their contributions from surfaces?
 
As I said, it's no longer effective. Instead of using MSAA for edges and other methods for the internals, people have deemed it's better to just use other methods for everything.
Sure, other methods may be preferred, but what exactly is the issue you alluded to with the design of MSAA being limited to filtering geometric edges by itself?

I don't find it contentious in itself to use both general and specialized solutions if a particular problem calls for either ...
Unity is a mobile-first platform; as an engine it's behind all other engines in features and performance. I don't see how that qualifies as serious adoption at all.
Even on consoles or PCs, many VR games use forward rendering and in the near future Epic Games is going to replace their legacy desktop forward renderer with a unified (mobile/desktop) Forward+ renderer. Some modern AAA games use Clustered/Forward+ such as Detroit: Become Human, Doom Eternal, and arguably recent CoD entries ...
I disagree here. Geometric edges are easier to solve than shader aliasing -- especially when importance sampling materials. I can't imagine tolerating a noisy specular highlight from a large area light that's required to shade an entire interior room with no other local light sources.
Agree or disagree, there are other viable techniques for solving specular aliasing outside of temporal accumulation, which by itself isn't adequate in many cases ...
How can MSAA be used when sampling light sources and/or their contributions from surfaces?
MSAA doesn't have any specifically meaningful context in ray tracing, because visibility and shading aren't decoupled there; it's presumably a euphemism for adding more samples, as opposed to rasterization, where we sample visibility at a higher rate than shading ...
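That decoupling can be put into numbers (the pixel count, edge fraction, and triangles-per-edge-pixel are illustrative assumptions): rasterized MSAA shades roughly once per pixel except at edges, while each extra ray-traced sample is a full extra shade.

```python
def raster_msaa_shades(pixels, edge_fraction, tris_per_edge_pixel=2):
    """With MSAA, coverage/depth is sampled N times per pixel, but the pixel
    shader runs once per covered triangle per pixel, so only edge pixels
    (covered by more than one triangle) pay for extra shader invocations."""
    interior = pixels * (1 - edge_fraction)
    edges = pixels * edge_fraction * tris_per_edge_pixel
    return interior + edges

def raytraced_supersample_shades(pixels, samples):
    """With ray tracing, each sample carries its own visibility AND shading,
    so every added sample is a full extra shade."""
    return pixels * samples

raster = raster_msaa_shades(1_000_000, 0.1)          # 1,100,000 invocations
rt_4x = raytraced_supersample_shades(1_000_000, 4)   # 4,000,000 shaded samples
```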
 
Sure, other methods may be preferred, but what exactly is the issue you alluded to with the design of MSAA being limited to filtering geometric edges by itself?

I don't find it contentious in itself to use both general and specialized solutions if a particular problem calls for either ...

It's expensive in memory and requires special care for some rendering techniques, and it doesn't really perform better than other methods. Why use it when the upsides don't outweigh the downsides?
 
Even on consoles or PCs, many VR games use forward rendering and in the near future Epic Games is going to replace their legacy desktop forward renderer with a unified (mobile/desktop) Forward+ renderer.
VR games are just like mobile games: performance-limited and last-gen looking. 95% of games on mobile and VR have no more than one or two light sources lighting the scene, which is why they chose Forward/Forward+.

Some modern AAA games use Clustered/Forward+ such as Detroit: Become Human, Doom Eternal
I never said no game ever used Forward+, they are just very few and far between compared to the vast majority of AAA titles that rely on Deferred.
 