Upscaling Technology Has Become A Crutch

I can't really comment on the software side. However, it is certainly a crutch on the hardware side. I feel that Nvidia has based the entirety of the 40x0 series' sales on the back of DLSS 3.0.

I have a 3080 and have used DLSS, and I always choose native over DLSS unless I have no choice but to use it. I haven't found an instance where DLSS looks better than a higher native resolution.

Considering AMD GPUs lack the hardware necessary for DLSS and they don't vastly outperform Nvidia at native, I don't really see how it could be a crutch. And if anything, that hardware is highly profitable for Nvidia with all of the AI stuff.

I thought DLSS looked awful at 1080p, but at 1440p DLSS Quality looks good and I pretty much always turn it on. Some TAA implementations are bad, and in those cases I think DLSS can look better. Balanced doesn't look great, though. Haven't touched 4K.

On the software side you can follow the research, and there's a ton of stuff being worked on to render faster or to render more complex graphics. The fact that so many graphics researchers are looking at techniques that sample over time, rather than just spatially, tells me that it's fundamentally necessary, and that "native" rendering is dead.
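To make "samples over time" concrete, here's a minimal single-pixel sketch of the general idea behind temporal accumulation (my own toy illustration, not any particular vendor's algorithm): each frame contributes one jittered sample, and a running history converges toward the many-sample average.

```python
import random

def temporal_accumulate(sample_scene, frames=64, alpha=0.1):
    """Toy single-pixel example of temporal accumulation, the core idea behind
    TAA/TSR/DLSS-style techniques: blend one new jittered sample per frame
    into a running history instead of taking many samples at once."""
    history = None
    for _ in range(frames):
        # jitter the sample position within the pixel each frame
        jx, jy = random.random() - 0.5, random.random() - 0.5
        sample = sample_scene(jx, jy)
        # exponential moving average: cheap per frame, converges over time
        history = sample if history is None else (1 - alpha) * history + alpha * sample
    return history

# toy "scene": a pixel half covered by a bright edge
scene = lambda jx, jy: 1.0 if jx > 0.0 else 0.0
print(temporal_accumulate(scene))  # converges toward ~0.5 coverage
```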
 
I am talking about the 40x0 series vs the 30x0 series. The only selling point is DLSS 3.

I've used a 1440p ultrawide and a 4K monitor, and DLSS looks awful on both of them with my 3080. The image quality always looks worse, and it's extremely noticeable in Diablo right now.
 
I'm super curious about impressions of Remnant 2, so I've been watching some vids. This one is great entertainment.


This guy says he doesn't want better graphics, he just wants performance, and then gets mad that he has to set the graphics to low to increase performance and leaves it on high anyway. Lol.

Ok, so I downloaded the game and went to the area he's showing in the vid. I was facing down the ramp towards some buildings, which is actually not the worst-performing spot in the area, but I'll get to that.

I have a Ryzen 5600X, an RTX 3080 10GB, and 16GB of RAM. I got bored, so I stopped testing medium and high on some of these.

settings (fps)                            | low | medium | high | ultra
native 1440p                              |  60 |   50   |  50  |  44
DLSS Quality                              |  97 |   79   |  74  |  70
DLSS Balanced                             | 107 |   93   |  88  |  79
DLSS Performance                          | 117 |  105   |  98  |  89
DLSS Ultra Performance                    | 137 |  127   | 120  | 108
all low, vary shadows, DLSS Quality       |  97 |   92   |  88  |  81
all ultra, vary shadows, DLSS Quality     |  80 |   76   |  74  |  68
all low, vary post, DLSS Quality          |  96 |    -   |   -  |  91
all low, vary foliage, DLSS Quality       |  96 |    -   |   -  |  96
all low, vary effects, DLSS Quality       |  96 |    -   |   -  |  86
all low, vary view distance, DLSS Quality |  96 |    -   |   -  |  96
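For reference when reading the table, the commonly cited per-axis render scales for the DLSS modes imply roughly these internal resolutions at 1440p output (approximate figures; exact values can vary by title and SDK version):

```python
# Approximate per-axis render scales commonly cited for DLSS 2.x modes
modes = {
    "Quality":           0.667,
    "Balanced":          0.580,
    "Performance":       0.500,
    "Ultra Performance": 0.333,
}

out_w, out_h = 2560, 1440
for name, scale in modes.items():
    w, h = round(out_w * scale), round(out_h * scale)
    print(f"{name:17s} ~{w}x{h}  (~{scale * scale:.0%} of the output pixel count)")
```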

There are actually a bunch of views that are CPU-limited for me when the game is set to low. There's a warehouse where my fps would drop and my GPU would go down to about 60% utilization. I'm guessing there's some culling going on, a lot of draw calls, and my CPU gets hit. In the view I tested above I was basically GPU-limited unless I was using DLSS Performance or DLSS Ultra Performance (around 96% GPU utilization).
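As a rough mental model for that ~60% GPU utilization (my own simplification with made-up timings, not measurements from the game): if the CPU can only prepare a frame every cpu_ms and the GPU renders it in gpu_ms, the frame rate is set by the slower of the two, and GPU utilization falls toward gpu_ms / cpu_ms when the CPU is the bottleneck.

```python
def frame_stats(cpu_ms: float, gpu_ms: float):
    """Crude single-queue model: each frame costs max(cpu_ms, gpu_ms)."""
    frame_ms = max(cpu_ms, gpu_ms)
    fps = 1000.0 / frame_ms
    gpu_util = gpu_ms / frame_ms  # fraction of the frame the GPU is busy
    return round(fps), round(gpu_util * 100)

# hypothetical numbers for a draw-call-heavy view at low settings:
print(frame_stats(cpu_ms=10.0, gpu_ms=6.0))   # ~100 fps, GPU only ~60% busy
# versus a GPU-bound view at native resolution:
print(frame_stats(cpu_ms=8.0, gpu_ms=16.7))   # ~60 fps, GPU ~100% busy
```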

Setting shadows to low doesn't seem to remove all shadows, as I had thought from some pictures. Haven't totally figured that one out. The shadows setting has the biggest hit, but effects and post-processing have pretty big performance implications as well. Effects is a bit of a weird one, because in the particular view looking down that ramp I don't know what was changing to cause that performance difference. Kind of the same with post-processing and shadows, to be honest. I'd have to compare side by side. I think the differences would be more obvious in other scenes that have reflections, etc.
 
There are hints of truth in here, but I think the ultimate conclusion is erroneous. I believe many studios today have less code craftsmanship than similar studios would have had in the past. I do believe many contemporary games could look a lot better than they do had their engineers been more competent. But I don't think that reality is caused by access to shortcuts such as frame reconstruction and the like.

In a parallel universe where these "crutches" were never created, these devs would not have magically become technical wizards all of a sudden. They'd just have even worse performance or resolution, or downgraded render settings and/or asset quality to compensate.

The reasons for lower technical competency, I believe, are unrelated to the existence of upscaling tech. In the end, these shortcuts are more of a saving grace, as others have said here, than a crutch.
 
Well, that's been my point for years now. It became especially visible after the PS360 generation; that was the point when memory got cheap and frameworks arose everywhere. On PC it started a bit sooner. Just think about what was possible with the PS360 gen: those consoles had 512 MB of memory, and the OS only needed a few megabytes of that (if I remember correctly it was 10-12 MB on the X360). Today over 2 GB is reserved just for the OS.
Memory usage simply inflated as more of it became available and nobody needed to optimize for a few megabytes anymore.
Yes, this made things much easier and even made some things possible that weren't possible before, but the trend continued, especially with textures these days. On "ultra" they look good, but one step lower and they get blurry, as if nobody ever looked at them before.

But all those problems could be fixed with enough time; the catch is that "time" costs a lot of money, so no one really wants to fix those issues.

A good example of that is Skyrim or GTA 5. Both spanned three console generations and made visual progress, but the cost of this "small" jump is many, many resources that weren't needed on the PS360 consoles (sure, the games had much worse performance problems back then, but that was accepted as normal).
 
I don't understand why everyone is so upset about this specific incident when Epic Games intentionally designed Nanite to be used in conjunction with TAA in general ...

Upscaling by itself is a red herring for the main issue, which is the requirement of temporal reuse of samples to get acceptable image quality, and that seems reasonable enough given the inherently aliased nature of Nanite's virtualized micropolygon geometry ...
 
I think graphics-wise Remnant 2 is fine. The main complaints are probably more about its artistic direction than its technical side. If you know where to look, Nanite is actually pretty impressive in this game. The textures are also quite good. My main problem is the lack of dynamic GI (Lumen or ray tracing), which is probably also why some people find it lacking. But I understand that with RT or Lumen it would be even slower.

Its default setting on my computer is DLSS Performance without DLSS 3, but I think it has too many artifacts, so I changed it to DLSS Quality with DLSS 3, which is also what GeForce Experience suggested. I haven't noticed any artifacts, except in the credits roll when there are many "Environmental Artist" entries scrolling at the same time, which seems to confuse DLSS 3.
 
Remnant 2 is odd because it hugely benefits from upscaling. Just going from native 4K to 4K/DLSS Quality boosts performance by like 60%, which I don't think I've ever seen. I generally see it in the range of 20-30%.
A major part of this is probably due to the nature of Nanite and VSMs. When you increase the resolution of a classic game, you tend to just blow up the polygons. It adds some shading work but generally the resolution of shadow maps, the detail of geometry and so on do not increase proportionally. So it may say "4k" and you get (often overly) crisp albedo textures, but in reality most of the game is just... bigger polygons and even more undersampled/blurry shadows. Gamers are used to this by this point, but it is obviously not the goal.

On the contrary, both Nanite and VSM target polygon sizes and shadow sampling rates (respectively) that are proportional to the pixel sampling rate. i.e. if you quadruple the primary pixel count (1080->4k), you will often do the same to both the geometric detail (assuming the mesh detail is available in the source asset) and the shadow resolution, which obviously has a much greater impact on performance than classic resolution changes. But that's really the point - classic resolution changes are not some holy grail of correctness. In many ways you can think of them as their own kind of "upsampling"; you are increasing the evaluation of part of the visibility and shading function (BRDF, textures) but not other parts (shadows, GI, etc). In reality, we really do want these rates to be directly related so that the nature of the image doesn't change fundamentally between low and high resolutions. The "side effect" of this is that resolution is a big hammer now in terms of performance and quality for games that use these technologies, and that's a good thing. People will just need to adjust their expectations on that front.
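As a back-of-the-envelope illustration of that proportionality (my own toy numbers, not anything measured from the engine): if triangle density and shadow texel density both track the primary pixel count, then going 1080p -> 4K roughly quadruples all three buckets of work, and rendering at a reduced internal resolution shrinks all of them before upscaling.

```python
def resolution_scaled_work(width, height, px_per_tri_edge=1.0, shadow_texels_per_px=1.0):
    """Toy model: with Nanite/VSM-style targets, rendered triangle count and
    shadow texel count are chosen proportionally to the primary pixel count,
    so all three buckets of work scale together with resolution."""
    pixels = width * height
    triangles = pixels / (px_per_tri_edge ** 2)   # proportional; constant factors ignored
    shadow_texels = pixels * shadow_texels_per_px
    return pixels, triangles, shadow_texels

w1080 = resolution_scaled_work(1920, 1080)
w4k   = resolution_scaled_work(3840, 2160)
print([round(b / a, 2) for a, b in zip(w1080, w4k)])  # [4.0, 4.0, 4.0]

# DLSS Quality at 4K renders at ~0.667 scale per axis, i.e. ~44% of the
# output pixels, so all of this resolution-proportional work shrinks by
# more than half before upscaling, which is why the gains can be so large.
print(round(0.667 ** 2, 2))  # ~0.44
```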

Now of course you can argue that you personally like the blown up polygons and blurry shadows look, and that's fine. The new systems can of course be configured to undersample these parts more heavily if that is the goal, but I don't expect it to be the norm. For most people, in most situations - and specific game art aside - better lighting and cleverness applied to a bunch of well distributed, stochastically sampled pixels plus smart upsampling is a (much) better use of performance than brute force. This was established in the research literature decades ago now, even for offline rendering. I don't think anyone I know has been particularly vague on this front... things like dynamic GI require an order of magnitude more hardware performance but can have a much greater impact on visual quality than simply brute forcing some more primary visibility, especially considering the better sampling patterns enabled by modern super-resolution/temporal AA (much like how MSAA can look better and be more stable than doubling the resolution with uniform sampling beyond a certain visual pixel density).

The other consideration for games with more limited resources that consumers may not see is that the actual game production with technology like Nanite and Lumen can be significantly streamlined, allowing smaller teams to produce more content. I haven't played through Remnant 2 but from the video it doesn't look like there's a whole lot of dynamic lighting going on, so it could probably perform better with classic baked lighting. That said, baked lighting and manual LODs add a large amount of overhead to content production and thus it's very possible that given a fixed team size and time frame they would not have been able to produce nearly as much content. Obviously this is a consideration that the end user doesn't really see directly, but it's a big part of the benefit of these more modern, automatic systems.
 

That's really illuminating, thanks! I can see why you put so much effort into the new upscaler now, and it also helps put the developers' comments about Remnant 2 being designed for upscaling into much better context.
 
Could that be exposed in a game's settings? Sampling Nanite at 1080p and drawing at 4K might benefit framerate a lot?
Yes, but it is somewhat content dependent. For instance, Valley of the Ancients I believe targets ~2 pixel edge lengths for triangles with Nanite vs. the default of 1 (i.e. matched resolution). This works well for that content because it is scanned and thus fairly regularly sampled. While it has a lot of detail, it also has fairly monotone albedo textures that tend to hide a lot of issues regardless. If you push Nanite geometry detail too low relative to sampling rates, you will of course get popping again, albeit at a finer-grained cluster level rather than full meshes.

TSR actually takes this into account and biases the Nanite detail level further towards the *target* resolution regardless, in the same way that texture LODs are biased with super-resolution: you *are* sampling them at higher rates, you are just doing it over multiple frames.

VSMs currently do not bias towards the upscaled resolution for two reasons. First, it would be a lot more expensive, and usually people are running at lower resolution to gain performance. Second, and perhaps more importantly, VSMs do not do jittered rasterization in light space because that would negate the ability to cache. Instead we do stochastic sampling of the shadow maps (via shadow map ray tracing) and use TSR/TAA to integrate reasonable sample counts over time. Obviously in the case of quickly moving objects this can still generate some noise, but overall it works quite well.
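For anyone wondering what "biased in the same way that texture LODs are biased" means in practice: temporal upscalers typically have the game apply a negative mip bias derived from the render-to-output ratio, so textures are sampled at roughly the output resolution's detail level even though fewer pixels are shaded each frame. A small sketch of the commonly used formula (the resolutions below are just an example):

```python
from math import log2

def texture_mip_bias(render_width: int, display_width: int) -> float:
    """Commonly recommended negative LOD bias when rendering below output
    resolution with a temporal upscaler: sample textures as if shading at
    the output resolution and let the per-frame jitter recover the detail."""
    return log2(render_width / display_width)

# e.g. a ~2560-wide internal render for a 3840-wide (4K) output:
print(round(texture_mip_bias(2560, 3840), 2))  # ~ -0.58
```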

I'm not entirely sure what you mean by "drawing at 4k"... do you mean evaluating material shading at 4k? Which part of material shading? Things are pretty complicated nowadays, to the point that even the term "native 4k" doesn't mean a whole lot. Suffice it to say there are dials people can play with, but you should indeed expect most Nanite games to have a heavier up-front cost, but (much) better scaling with more geometry and larger view ranges. And indeed, rendering Nanite-detail-level geometry at 4k is probably not going to be a good use of performance; the systems are designed to produce very similar results with much better performance via integration with TSR.
 
Theoretically speaking, these technologies are supposed to allow us to intelligently render graphics and better utilize the GPU. In actuality, they just encourage an extreme lack of optimization and the proliferation of technical incompetence.
Yes and yes. :D
 

Being designed for upscaling is fine and actually a good thing given potential benefits of spending hardware resources elsewhere. The kicker is whether the new tech + upscaling results in a better end result. Coverage of Remnant 2 so far indicates the result isn’t particularly impressive. I found the new UE5 Lords of the Fallen to be more visually impressive both technically and artistically.
 
Edit: Looks like today's patch increased performance a lot, even with the new detailed shadows setting enabled. Again, this is just one scene staring down a ramp from one of the videos referenced earlier. For the new patch I tried to line up the view as best as I could remember; it may not be exact, but it should be very close, and nothing would be off far enough to account for these gains.

Edit: Also note that the DLSS Ultra Performance results fluctuate a lot, so I'm eyeballing the fps, and the GPU utilization does drop well below 98-99%, so there's likely some CPU impact in those results.


Original Results (not re-tested)
settings (fps)         | low | medium | high | ultra
native 1440p           |  60 |   50   |  50  |  44
DLSS Quality           |  97 |   79   |  74  |  70
DLSS Balanced          | 107 |   93   |  88  |  79
DLSS Performance       | 117 |  105   |  98  |  89
DLSS Ultra Performance | 137 |  127   | 120  | 108

New Patch
settings (fps)         | low | medium | high | ultra
native 1440p           |  80 |   75   |  68  |  62
DLSS Quality           | 119 |  110   |  97  |  91
DLSS Balanced          | 127 |  120   | 106  |  98
DLSS Performance       | 133 |  129   | 117  | 108
DLSS Ultra Performance | 140 |  138   | 133  | 128

New Patch w/ detailed shadows enabled
settings (fps)         | low | medium | high | ultra
native 1440p           |  76 |   71   |  65  |  57
DLSS Quality           | 110 |  105   |  98  |  87
DLSS Balanced          | 117 |  113   | 106  |  97
DLSS Performance       | 120 |  123   | 117  | 107
DLSS Ultra Performance | 127 |  130   | 122  | 120
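Expressed as a relative uplift (a quick throwaway calculation over the low and ultra columns, numbers copied from the tables above): native gains the most and the heavily upscaled modes the least, which would be consistent with the patch mostly cutting GPU-side, resolution-proportional work, with the Ultra Performance numbers partly CPU-limited as noted.

```python
# fps before/after the patch, copied from the low and ultra columns above
before = {"native 1440p": (60, 44), "DLSS Quality": (97, 70), "DLSS Ultra Perf": (137, 108)}
after  = {"native 1440p": (80, 62), "DLSS Quality": (119, 91), "DLSS Ultra Perf": (140, 128)}

for mode in before:
    low_gain, ultra_gain = (f"{b / a - 1:+.0%}" for a, b in zip(before[mode], after[mode]))
    print(f"{mode:15s} low {low_gain}, ultra {ultra_gain}")
# native 1440p    low +33%, ultra +41%
# DLSS Quality    low +23%, ultra +30%
# DLSS Ultra Perf low +2%,  ultra +19%
```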
 
I'm not entirely sure what you mean by "drawing at 4k"...
Undersampling the geometry, and whatever else. In essence, evaluate the scene at 1080p to get your triangles but draw them at 2160p. What you describe for the Valley demo sounds like just that, only "540p sampling" (1-pixel edge lengths) and "1080p rendering" (2-pixel edge lengths).
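To put rough numbers on the edge-length equivalence (my own arithmetic, not an actual engine setting):

```python
def effective_geometry_res(out_w, out_h, edge_length_px):
    """Rough equivalence: targeting triangle edges of edge_length_px pixels
    is like sampling the geometry at 1/edge_length_px of the output
    resolution per axis."""
    return int(out_w / edge_length_px), int(out_h / edge_length_px)

print(effective_geometry_res(1920, 1080, 2))  # (960, 540): "540p sampling" at 1080p output
print(effective_geometry_res(3840, 2160, 2))  # (1920, 1080): "1080p sampling" at 4K output
```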

I'm curious what settings can be exposed at runtime for people to play around with. Is it heavily dependent on the way the content is created?
 
The developers admitted themselves that they designed the game around upscaling technologies, and gamers are upset, as you can see in the Reddit thread below, because they blame the lack of performance, even on consoles, on that.

--General Performance--

We've heard from a few folks about the game's overall performance. We're definitely going to roll out performance updates after the game's launch. But for the sake of transparency, we designed the game with upscaling in mind (DLSS/FSR/XeSS). So, if you leave the Upscaling settings as they are (you can hit 'reset defaults' to get them back to normal), you should have the smoothest gameplay.

You're free to tweak other settings for better performance – changes to Shadow Settings will make the biggest difference besides Upscaling.

Still having trouble with the game's performance even after using our recommended settings? Just let us know. We're here to help sort out any issues you're facing.


 
I really don't understand how these studios can suddenly pull these gains out of their titles shortly after release, once there are mass complaints, when they couldn't manage those gains in the 2-3 years they've been making the game. I always suspect that they just reduced certain quality levels of some of the more intensive effects/materials/stages of the game engine without telling anyone.
 

Do they, though? Large gains like these aren't that common ime.
 
With how everyone scrutinizes things these days, that’d be almost impossible.
 