Digital Foundry Article Technical Discussion [2020]

I'm hoping AMD comes up with a viable competitor for DLSS 2.0, or things could get very ugly for them in the GPU space if DLSS 2.0 starts being adopted widely. DLSS 3.0 is probably only a year or two away.

They will, most likely with the generation after the first RDNA2 GPUs. The consoles missed that boat.
 
I'd thought about this, and it looks like Series X has only half of an RTX 2060's tensor power. Not sure how likely a DLSS 2.0-type solution is.
If you are targeting 30 fps it should be fine, right? Also, we don't know what AMD has that equates to Nvidia's denoiser. Perhaps there is some hardware solution that can be applied full-scene to scale, denoise, and sharpen that would look comparable.
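For rough context, here's the back-of-the-envelope math behind that "half of an RTX 2060" figure, using commonly quoted peak specs and reference clocks (a sketch of theoretical throughput, not a measurement):

Code:
# Rough peak ML throughput comparison using publicly quoted specs (illustrative only).
# Series X: 52 CUs * 64 lanes * 2 ops/clock (FMA) * 1.825 GHz = ~12.15 TFLOPS FP32.
xsx_fp32_tflops = 52 * 64 * 2 * 1.825e9 / 1e12           # ~12.15
xsx_fp16_tflops = xsx_fp32_tflops * 2                    # packed FP16 -> ~24.3
xsx_int8_tops   = xsx_fp32_tflops * 4                    # INT8 dot products -> ~48.6 (MS quotes 49)

# RTX 2060: 240 tensor cores * 64 FP16 FMAs/clock * 2 ops * ~1.68 GHz reference boost.
rtx2060_tensor_fp16_tflops = 240 * 64 * 2 * 1.68e9 / 1e12     # ~51.6
rtx2060_tensor_int8_tops   = rtx2060_tensor_fp16_tflops * 2   # ~103

print(xsx_fp16_tflops / rtx2060_tensor_fp16_tflops)      # ~0.47, i.e. roughly half

Peak numbers only, of course; sustained ML throughput on shader ALUs that also have to render the frame is a different story.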
 
If 3.0 doesn't arrive with Ampere, I'd be surprised.
DLSS is purely software. They won't need to bind it to Ampere; it will run on all of them, just slower with fewer tensor cores. Unless you meant they'll launch DLSS 3.0 at the same time as Ampere for extra selling power.
 

More raw power, so more perf gain from using the Quality preset. Who knows what else could be added?
 
So what improvements would 3.0 have? Just faster so able to aim higher?

Faster, better quality, maybe more control over scaling so you can scale to and from custom resolutions. Maybe add faked HDR like Series X is going to do with their backwards compatibility (which I expect might be done with DirectML). Lots of ways it can go.
 
It used to be done in shaders, so I do wonder if it actually needs the power of the tensor cores to do DLSS 2.0.
The model could still run on the shaders on Series X, but it would consume a lot of TF power.

On RTX GPUs, tensor cores don't run completely in parallel with CUDA cores when rendering DLSS; their work starts mostly after the CUDA cores have finished their part. The difference then comes down to the compute power the tensor cores are capable of delivering.

A Titan RTX has close to 260 TOPs of INT8, or 130 TFLOPs of FP16, from its tensor cores alone. If, theoretically, every CUDA core were used to run ML algorithms, they would amount to only 32 TFLOPs of FP16 (with rapid packed math) or 64 TOPs of INT8, which is a fraction of the tensor array's capability.

So the CUDA cores alone might render a full 4K DLSS frame in, say, 16 ms.
The combined CUDA cores + tensor cores would reduce that time to, say, 8 ms.
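Those figures roughly check out if you run the peak-throughput math (a sketch only, assuming the reference boost clock; real frame times depend on the model, memory bandwidth and scheduling, not just peak FLOPS):

Code:
# Peak-throughput sanity check for the Titan RTX numbers quoted above (illustrative).
clock_hz = 1.77e9                                     # ~1770 MHz reference boost (assumed)

# CUDA cores: 4608 lanes * 2 ops/clock (FMA); FP16 doubles via rapid packed math.
cuda_fp32_tflops = 4608 * 2 * clock_hz / 1e12         # ~16.3
cuda_fp16_tflops = cuda_fp32_tflops * 2               # ~32.6

# Tensor cores: 576 cores * 64 FP16 FMAs/clock * 2 ops; INT8 doubles again.
tensor_fp16_tflops = 576 * 64 * 2 * clock_hz / 1e12   # ~130.5
tensor_int8_tops   = tensor_fp16_tflops * 2           # ~261

# Naively scaling the 16 ms shader-only example by the FP16 throughput ratio gives an
# ideal-case lower bound; the ~8 ms figure above leaves room for overheads and the
# non-ML parts of the pass.
print(16 * cuda_fp16_tflops / tensor_fp16_tflops)     # ~4 ms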
 
Faster, better quality, maybe more control over scaling so you can scale to and from custom resolutions. Maybe add faked HDR like Series X is going to do with their backwards compatibility (which I expect might be done with DirectML). Lots of ways it can go.
The first two are possible; the scaling one is extremely hard to achieve. Neural networks like the one used here are designed and optimized around fixed inputs and fixed outputs. A network that takes in any screen image and blows it up to any output size is really tough to get good performance out of, especially in a realtime setting. You'd have to train a new model for every single input/output resolution pair, and that would be costly.

Better quality is very feasible (the more time you give the ML, the better the results will be), but it results in a bit (or a lot, depending on how far you want to go) more processing time, which can be offset by additional tensor cores.
But then you'd have this divide between shipping DLSS 2.0 models and DLSS 3.0 models, since it would be obvious that the 2xxx series won't be able to handle DLSS 3.0 models.
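To put a number on the custom-resolution point above, here is a toy illustration of why one model per resolution pair gets expensive quickly (the resolution lists are made up for illustration, nothing Nvidia has announced):

Code:
# Toy combinatorics: one trained network per (render resolution -> output resolution)
# pair means the model count grows multiplicatively. Resolutions here are invented.
render_res = [(960, 540), (1280, 720), (1708, 960), (1920, 1080)]
output_res = [(1920, 1080), (2560, 1440), (3840, 2160)]

pairs = [(r, o) for r in render_res for o in output_res
         if o[0] > r[0] and o[1] > r[1]]               # keep only genuine upscales
print(len(pairs), "models to train, validate and ship")    # 11 for these short lists

Every extra supported resolution on either side multiplies the training and QA cost, which is presumably why DLSS 2.0 sticks to a few fixed quality presets (scale factors) instead.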
 
Very pretty, but still a long way to go. Very blobby lighting with the denoising. Temporal artefacts up the wazoo (on the later bounces I think). Low res on the shadows sees poor grounding of objects on the corners of blocks, making them look like they're floating. And a low framerate to boot despite upscaling. On the simplest possible geometry for RTRT, I think this shows there's a whole lot of cheats and hacks needing to be developed for future games to push for this lighting quality in more complicated games.
 
Indeed, but this is a path tracer; it's significantly more complex than what they intended to do with the hardware, which was hybrid ray tracing. Ray tracing Minecraft would run faster than this, given the simple geometry. I don't think path tracing cares much about how many triangles there are, but light sources kill performance dramatically IIRC, whereas ray tracing performance is killed by the number of triangles. Something like that. Path tracing can do some things that ray tracers cannot, IIRC, i.e. certain types of light phenomena.

I dunno, it's tough, that's all I know, and I didn't expect a real-time path tracer in a game. I think, given time, as the hardware market for this matures, Minecraft will continue to evolve its path tracer for better performance.
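A rough way to see why bounces and light sampling hurt a path tracer more than raw triangle count (a simplified sketch, not how the Minecraft RTX path actually budgets its rays):

Code:
import math

# Simplified ray-budget sketch: ray count scales with resolution * samples * bounces,
# plus a shadow ray toward a sampled light at each bounce. Triangle count only shows
# up inside per-ray BVH traversal, which grows roughly logarithmically. Numbers invented.
width, height = 1920, 1080
spp, bounces  = 2, 4            # samples per pixel, path depth
shadow_rays   = 1               # next-event-estimation ray per bounce

rays_per_pixel = spp * bounces * (1 + shadow_rays)
total_rays     = width * height * rays_per_pixel
print(total_rays / 1e6, "million rays per frame")        # ~33 million

# Doubling the geometry adds roughly one extra BVH level per ray, not 2x the work:
for tris in (1_000, 1_000_000, 100_000_000):
    print(f"{tris} tris -> ~{round(math.log2(tris))} BVH levels per ray")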
 
Very pretty, but still a long way to go. Very blobby lighting with the denoising. Temporal artefacts up the wazoo (on the later bounces I think). Low res on the shadows sees poor grounding of objects on the corners of blocks, making them look like they're floating. And a low framerate to boot despite upscaling. On the simplest possible geometry for RTRT, I think this shows there's a whole lot of cheats and hacks needing to be developed for future games to push for this lighting quality in more complicated games.

Ye, it's a beta :)
 
Wonder what DXR is going to be like, and if Nvidia would've been better off just helping develop that version?
Guess nothing lost doing this though.
 
What do you mean? It's the same thing - this is made using DXR.
 

It's just faster using the RTX cores. Likewise, DLSS tech (or comparable tech by then) can be used on an RDNA2 GPU, just slower without the tensor cores.
 

Impressive!
Nice video. The only issue is that when watching most videos on my monitor the black crush is overwhelming, 'cos I never disable HDR (so I don't have to fiddle with setting it on or off). Still, the tech is incredible. The Crysis developers mentioned API-agnostic raytracing support, but I don't think GPUs like mine (GTX 1080) could ever run a game with raytracing on.
 
What do you mean? It's the same thing - this is made using DXR.
Is it the same?
I was under the impression that the version MS showed (DXR) during the XSX reveal is different from the RTX one.
Could be wrong, but I'm sure DF said they were different, even though Nvidia did help out?
3:34 in.

Microsoft demonstrated how fully featured the console's RT features are by rolling out a very early Xbox Series X Minecraft DXR tech demo, which is based on the Minecraft RTX code we saw back at Gamescom last year and looks very similar, despite running on a very different GPU. This suggests an irony of sorts: base Nvidia code adapted and running on AMD-sourced ray tracing hardware within Series X

So as I said, the RTX version/branch isn't wasted effort, as it's all learning, but DXR is a fork, and my point was that I'm surprised they didn't just put all the effort into the DXR one now.
I suspect the RTX version will not run on RDNA2's RT hardware.
 