Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Prepare yourselves for the noobness of this question
Let's say I have a 1080p monitor and I want to game at 1080p.
In game, do I select the resolution I want my performance to be equivalent to (e.g. 480p) and enable DLSS,
so the game renders at 480p with 480p framerates but DLSS upscales it to 1080p?
Or do I select 1080p, enable DLSS, and everything is taken care of automatically?
 
In the presentation, with regard to DLSS 2.0, Jensen said the model is trained on their supercomputer using very high-res ideal frames from the game.
I thought this training no longer had to be done per game?
 
DLSS update incoming, DRS! :love:
  • New ultra performance mode for 8K gaming. Deliver 8K gaming on GeForce RTX 3090 with DLSS.
  • Improved VR support. Maintaining the 2880×1600 resolution of top-end VR head mounted displays while delivering the recommended 90 FPS has been made easier with DLSS.
  • Dynamic resolution support. The input buffer can change dimensions from frame to frame while the output size remains fixed. If the rendering engine supports dynamic resolution, DLSS can be used to perform the required upscale to the display resolution.

https://news.developer.nvidia.com/new-features-to-dlss-coming-to-nvidia-rtx-unreal-engine-4-branch/
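To make the dynamic resolution point concrete, here's a minimal, self-contained sketch of the contract described in that last bullet: the input (render) resolution can change every frame while the DLSS output stays pinned to the display resolution. The heuristic, scale limits, and timings below are my own assumptions for illustration, not NVIDIA's actual DRS logic or the NGX API.

```python
# Sketch of dynamic resolution feeding a fixed-size upscale (assumed logic).
OUTPUT = (3840, 2160)             # fixed output/display resolution
MIN_SCALE, MAX_SCALE = 0.5, 1.0   # assumed per-axis render-scale limits

def next_render_scale(scale, gpu_ms, target_ms=16.7):
    """Shrink the render target when over budget, grow it back with headroom."""
    if gpu_ms > target_ms:
        scale *= 0.90
    elif gpu_ms < 0.8 * target_ms:
        scale *= 1.05
    return max(MIN_SCALE, min(MAX_SCALE, scale))

scale = 1.0
for frame, gpu_ms in enumerate([18.2, 19.0, 17.1, 15.5, 12.0, 11.8]):  # fake timings
    scale = next_render_scale(scale, gpu_ms)
    render = (int(OUTPUT[0] * scale), int(OUTPUT[1] * scale))
    # DLSS would take the render-sized buffers here and always emit OUTPUT-sized frames.
    print(f"frame {frame}: render {render[0]}x{render[1]} -> output {OUTPUT[0]}x{OUTPUT[1]}")
```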
 

The VR support is pretty nice. It would really help out on the Index.
 
Ultra performance mode is apparently 9x scaling, so 1440p -> 4320p. Curious to see how it looks.
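For anyone wondering where the 9x comes from, a quick sanity check of the pixel counts (assuming 1440p means 2560x1440 and 4320p means 7680x4320):

```python
# Ultra Performance reportedly renders at 1/3 of the output resolution per
# axis, which works out to 1/9 of the pixels overall.
render = (2560, 1440)   # internal render resolution ("1440p")
output = (7680, 4320)   # display resolution ("4320p" / 8K)

pixels_in = render[0] * render[1]    # 3,686,400
pixels_out = output[0] * output[1]   # 33,177,600
print(pixels_out / pixels_in)        # 9.0 -> 3x per axis, 9x total pixels
```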


I mean, it's great for the few people with 8K TVs/monitors. I'm more interested in the VR features for DLSS. Hoping to hear which titles support it.
My Index is 1440x1600 per eye and supports 120 Hz and 144 Hz. It would be great if I could play around and find a render resolution that lets me hit 1440x1600 with DLSS at 144 Hz. It would be even better if I could just run 1440x1600 at 144 Hz, use DLSS to go to a higher resolution like 2880x3200 at 144 Hz, and downsample that back to 1440x1600 for the best image quality possible.
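Rough numbers behind that idea, per eye (the 50% and 2x factors are just assumptions to show the two options, not confirmed for any VR title):

```python
native = (1440, 1600)        # Valve Index resolution per eye
budget_ms = 1000 / 144       # ~6.94 ms of frame time per frame at 144 Hz

# Option A: render below native and let DLSS reconstruct up to native.
half_input = (native[0] // 2, native[1] // 2)     # 720x800 with a 50%-per-axis mode

# Option B: render native, DLSS up to 2x per axis, then downsample back to
# native as a form of supersampling.
supersampled = (native[0] * 2, native[1] * 2)     # 2880x3200

print(round(budget_ms, 2), half_input, supersampled)
```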
 
That sounds like it has to be trained per game, otherwise why would they have to train it on *ALL* titles?
If it were generic, they would only have to train it on a few titles?
It just sounds like they're continuing to improve their models with a variety of 16K ultra-high-quality images.

Just because it no longer needs to be trained on a game-by-game basis doesn't mean they stop training to improve it.
Basically, DLSS will continue to get better and improve its final results.
 
Fortnite joins the list of games that have flipped the RTX switch to on. The game will soon support both ray tracing and DLSS.
Nice, ray tracing is finally arriving, though it's still years off from being ubiquitous.
I assume Nvidia used very high-end hardware for this video, but even with that top hardware the reflections are quite low quality.
And Fortnite isn't close to being a demanding game, graphically speaking.
Still, I'd take low quality over the approximations that are used today.
 
Could someone answer my noob question plz
In most circumstances you simply leave the game resolution at your native resolution; DLSS handles telling the game to render frames at a lower resolution and does the AI reconstruction up to your output resolution. Some recent titles let you adjust what the internal resolution is, I believe, but you don't need to.
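A rough illustration of what that means in practice; the per-axis factors below are the commonly cited DLSS 2.x ones and can vary per title, so treat them as approximate:

```python
# The game keeps presenting at your display resolution; the DLSS quality mode
# just picks the internal render resolution via a per-axis scale factor.
MODES = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(output_w, output_h, mode):
    s = MODES[mode]
    return round(output_w * s), round(output_h * s)

# 1080p display with DLSS Performance -> the game renders at roughly 960x540.
print(internal_resolution(1920, 1080, "Performance"))
```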
 
On whether it has to be trained per game: check the video below; I've started it at the relevant part, but I'd suggest watching the entire video (it's pretty short). You can never have enough of a "few titles" for generic training. :runaway: The greater the generic sample size, the more accurate your results will be.
 
Seems like DLSS cost in frametime will be significantly lowered because of the new concurrency stuff. This should be good news for high-framerate situations, where the relative cost of DLSS becomes more significant.

I'm pretty clueless about all things rendering, but I'm surprised DLSS processing can be started before the frame is fully rendered.

[Attached slide from the NVIDIA GeForce RTX 30 Tech Session]
 
Perhaps it overlaps with the RT work of the next frame?
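A toy way to see why that kind of overlap would matter (purely speculative scheduling with made-up stage costs, not how Ampere actually schedules work): if the tensor-core DLSS pass of frame N can run while frame N+1's rendering starts, its cost mostly disappears from the delivered frame time instead of adding to it.

```python
RENDER_MS, DLSS_MS = 10.0, 1.5   # invented per-frame costs for illustration

def serial_ms(frames):
    # Render frame N, then run DLSS on it, then start frame N+1.
    return frames * (RENDER_MS + DLSS_MS)

def overlapped_ms(frames):
    # DLSS of frame N runs concurrently with the render of frame N+1,
    # so steady-state frame time is set by the longer of the two stages.
    return frames * max(RENDER_MS, DLSS_MS) + min(RENDER_MS, DLSS_MS)

for n in (1, 100):
    print(n, serial_ms(n), overlapped_ms(n))   # 100 frames: 1150 ms vs 1001.5 ms
```

At those made-up numbers that's roughly 87 fps versus 100 fps, which is why hiding the DLSS pass matters more the higher the framerate gets.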
 