Nvidia shows signs in [2023]

  • Thread starter Deleted member 2197
Status
Not open for further replies.
Interesting. Looking at the example, is there any reason why this seems to make the GI look so much more vibrant? Could the accumulation errors mentioned really make that much of a difference? The fourth slice in that first image almost looks like it's on a different detail setting, like more ray bounces or something.

I wonder if it will be enough of an image quality improvement that you could, say, drop from Quality to Balanced and get similar image quality to the older DLSS, but with a bit of a performance bump instead.
 
Interesting. Looking at the example, is there any reason why this seems to make the GI look so much more vibrant? Could the accumulation errors mentioned really make that much of a difference? The fourth slice in that first image almost looks like it's on a different detail setting, like more ray bounces or something.

I wonder if it will be enough of an image quality improvement that you could, say, drop from Quality to Balanced and get similar image quality to the older DLSS, but with a bit of a performance bump instead.
Yeah, I don't trust that image comparison one bit. DLSS 2 does not totally change the visuals the way it's being demonstrated here, so I doubt DLSS 3 will be that different either. One of the other images shows a more likely scenario, where almost everything is the same but RT reflections are more detailed in some areas due to better temporal analysis.

I've also seen some arguing elsewhere about whether this will come to older GPUs. I think we can safely say it won't, no matter whether the 20/30 series can technically handle it, due to it being called DLSS 3.5. Imagine how confusing it would be if DLSS 3 wasn't available for 20/30 series parts but somehow DLSS 3.5 was. Yet again, Nvidia's weird bundling of all these quite different techniques under the same 'DLSS' name makes no real sense aside from their ability to keep locking new features behind newer parts.
 
Interesting. Looking at the example, is there any reason why this seems to make the GI look so much more vibrant? Could the accumulation errors mentioned really make that much of a difference? The fourth slice in that first image almost looks like it's on a different detail setting, like more ray bounces or something.
The fourth slice there is missing the light sources that are being reflected in the first three, which is the main reason for the visual difference.

I wonder if it will be enough of an image quality improvement that you could, say, drop from Quality to Balanced and get similar image quality to the older DLSS, but with a bit of a performance bump instead.
It's a ray tracing improvement only; it won't fix the other issues you get from going to a lower rendering resolution.
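For anyone wondering what the "accumulation errors" above refer to: conventional real-time denoisers blend each noisy ray-traced frame into a running history buffer. A minimal sketch of that idea (a plain exponential moving average; this is my own toy, not NVIDIA's actual pipeline) shows both why it works on a stable signal and why it lags when the lighting changes:

```python
import numpy as np

def temporal_accumulate(noisy_frames, alpha=0.1):
    """Exponentially blend each new noisy frame into a running history.

    A low alpha averages away Monte Carlo noise, but it also reacts
    slowly to lighting changes -- the 'accumulation error' (lag and
    ghosting) that a learned denoiser tries to avoid."""
    history = noisy_frames[0].astype(float)
    for frame in noisy_frames[1:]:
        history = (1.0 - alpha) * history + alpha * frame
    return history

rng = np.random.default_rng(0)
# 60 noisy samples of a constant radiance of 1.0: accumulation converges.
frames = 1.0 + rng.normal(0.0, 0.5, size=(60, 4, 4))
steady = temporal_accumulate(frames, alpha=0.1)

# The light suddenly doubles; the running average lags behind 2.0.
bright = 2.0 + rng.normal(0.0, 0.5, size=(5, 4, 4))
lagged = temporal_accumulate(np.concatenate([frames, bright]), alpha=0.1)
print(steady.mean(), lagged.mean())
```

With `alpha=0.1`, five frames after the change the history has only covered about 40% of the gap, which is why disoccluded or newly lit areas look smeared or dim until the history catches up.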

I think we can safely say it wont
We can safely say it will.

due to it being called DLSS 3.5
DLSS 3 works on all RTX GPUs.
 
Reflex is and was out there before, just like the DLSS 2.x upscaling component.

Marketing them under a new umbrella term doesn't make them new.
Who cares what makes what new? The fact is that DLSS 3 works on all RTX GPUs; the only component that doesn't is Frame Generation. So saying that 3.5 somehow won't work on anything but Ada because 3.0 supposedly doesn't is just wrong.
 
Ray Reconstruction looks great, and I look forward to reviews! I wonder if we will see a "different" type of innovation in competitors' upscaling technology. :ROFLMAO:
 
Looks quite promising. Cyberpunk 2077's Overdrive mode definitely suffers from denoising-related image quality issues currently. Hopefully this is as good as the marketing makes it seem.

Intel has actually had something similar in the works too:
 
Remember how just yesterday somebody asked what happened to AI denoising? Ha, good timing.
Yep, according to NVIDIA, games with DLSS 3.5 will perform both upscaling and denoising on the tensor cores for all RTX cards. It will also increase performance.

"The DLSS 3.5 framework will also include the existing Super Resolution, DLAA, and Frame Generation technologies which are supported across their own specific GPU generation. These technologies will harness the NVIDIA Tensor Core technology for unparalleled performance and jaw-dropping visual quality"

"NVIDIA explains that one aspect leading to the increased performance is that they can replace multiple denoisers with a single AI model and the final performance will depend from game to game"
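To make the "replace multiple denoisers with a single AI model" point concrete, here's a toy sketch (my own illustration, not NVIDIA's pipeline; a tiny box filter stands in for a real denoiser). Today each effect (GI, reflections, shadows) typically gets its own denoising pass over the frame; a unified model can instead look at all the signals in one traversal:

```python
import numpy as np

def box_blur(img):
    """Tiny vertical box filter as a stand-in for a real denoiser."""
    padded = np.pad(img, ((1, 1), (0, 0)), mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

rng = np.random.default_rng(1)
gi = rng.random((8, 1))       # noisy global illumination buffer
refl = rng.random((8, 1))     # noisy reflections buffer
shadows = rng.random((8, 1))  # noisy shadows buffer

# Current approach: one denoiser pass per effect.
separate = [box_blur(gi), box_blur(refl), box_blur(shadows)]

# Unified approach: one pass sees all signals at once (here, a single
# vectorized filter over the stacked buffers -- one frame traversal).
joint = box_blur(np.concatenate([gi, refl, shadows], axis=1))

assert np.allclose(np.concatenate(separate, axis=1), joint)
```

The per-game variance NVIDIA mentions then makes sense: the saving depends on how many separate denoisers a given title was running before.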

 
Nvidia is so far ahead of everyone else, it's not even funny. They're basically running their own race at this point. I'm not a fan of DLSS at all but this is the one application where for me, it actually makes a lot of sense and I'd actually use it. Ray reconstruction is a great use of this technology. I'm very impressed.
 
Yeah, I don't trust that image comparison one bit. DLSS 2 does not totally change the visuals the way it's being demonstrated here, so I doubt DLSS 3 will be that different either. One of the other images shows a more likely scenario, where almost everything is the same but RT reflections are more detailed in some areas due to better temporal analysis.

I've also seen some arguing elsewhere about whether this will come to older GPUs. I think we can safely say it won't, no matter whether the 20/30 series can technically handle it, due to it being called DLSS 3.5. Imagine how confusing it would be if DLSS 3 wasn't available for 20/30 series parts but somehow DLSS 3.5 was. Yet again, Nvidia's weird bundling of all these quite different techniques under the same 'DLSS' name makes no real sense aside from their ability to keep locking new features behind newer parts.

DLSS 3 is available on older GPUs, just not the Frame Generation part. Same here.

DLSS 2: Super Resolution (all RTX GPUs)
DLSS 3: Super Resolution (all RTX GPUs) + Frame Generation (40 series only)
DLSS 3.5: Super Resolution (all RTX GPUs) + Ray Reconstruction (all RTX GPUs) + Frame Generation (40 series only)
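The breakdown above can be expressed as a small lookup (a hypothetical helper of my own making, not an NVIDIA API; the feature table simply restates the list above):

```python
# Which DLSS 3.5 components are available per RTX generation,
# per the breakdown in this thread. "40" means 40 series only.
FEATURES = {
    "Super Resolution": "all RTX",
    "Ray Reconstruction": "all RTX",
    "Frame Generation": "40 series only",
}

def supported_features(gpu_series: int) -> list[str]:
    """gpu_series: RTX generation, e.g. 20, 30, or 40."""
    return [
        name
        for name, requirement in FEATURES.items()
        if requirement == "all RTX" or gpu_series >= 40
    ]

print(supported_features(30))  # ['Super Resolution', 'Ray Reconstruction']
print(supported_features(40))  # all three components
```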
 
The v3.5 DLAA will be interesting. It seems like it will be much better than native-res RT on both ends, and probably faster as well?
Yeah, it will be interesting to see how their AI RT denoiser + upscaler stacks up against games' own denoising solutions, which are prevalent right now. They say that CP2077 gets a minor perf boost from this thanks to switching from a multitude of denoisers to one common RR solution (which I presume runs better on RTX GPUs as well).

DLAA + RR apparently won't be possible.
That doesn't make much sense. DLAA is also "reconstruction", as it also uses several previous frames to "reconstruct" the final image.

I'm thinking only some fixed upscale ratios will be possible.
Why would that be? It should work with any input resolution, really.
 