Seems like a pretty normal performance delta. For some reason the 2080 Ti and 3070 do especially badly here and the 3080 is 70% faster than the 2080 Ti at 4K.
They use async compute to hide the cost of building the BVH; that's one of the reasons it's so cheap. This capture is at native 4K on the 3080 (no DLSS).
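For what it's worth, a minimal sketch of that pattern, assuming Vulkan (which id Tech 7 uses) and that the device, compute queue, build inputs and the VK_KHR_acceleration_structure function pointer already exist. This is not the game's actual code, just the general idea of overlapping the BVH build with graphics work:

```cpp
#include <vulkan/vulkan.h>

// Hypothetical sketch: record an acceleration-structure (BVH) build into a
// command buffer from a compute-only queue family and submit it there, so the
// build overlaps rendering on the graphics queue. All handles are assumed to
// be created elsewhere; pfnBuildAS comes from VK_KHR_acceleration_structure.
void BuildBvhAsync(VkDevice device,
                   VkQueue computeQueue,
                   VkCommandPool computeCmdPool,
                   const VkAccelerationStructureBuildGeometryInfoKHR& buildInfo,
                   const VkAccelerationStructureBuildRangeInfoKHR* rangeInfo,
                   VkSemaphore buildDone,
                   PFN_vkCmdBuildAccelerationStructuresKHR pfnBuildAS)
{
    VkCommandBufferAllocateInfo alloc{VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO};
    alloc.commandPool = computeCmdPool;
    alloc.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    alloc.commandBufferCount = 1;
    VkCommandBuffer cmd = VK_NULL_HANDLE;
    vkAllocateCommandBuffers(device, &alloc, &cmd);

    VkCommandBufferBeginInfo begin{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
    begin.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
    vkBeginCommandBuffer(cmd, &begin);

    // The BVH build is compute work, so the async compute queue can chew on it
    // while the graphics queue keeps rasterizing the rest of the frame.
    pfnBuildAS(cmd, 1, &buildInfo, &rangeInfo);

    vkEndCommandBuffer(cmd);

    // Signal a semaphore that the graphics queue waits on before tracing rays
    // against the freshly built structure.
    VkSubmitInfo submit{VK_STRUCTURE_TYPE_SUBMIT_INFO};
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    submit.signalSemaphoreCount = 1;
    submit.pSignalSemaphores = &buildDone;
    vkQueueSubmit(computeQueue, 1, &submit, VK_NULL_HANDLE);
}
```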
From several frame captures I see the cost of RT is usually in the 1.5-2.5 ms range at native 4K, which is pretty impressive.
> The gap closes to 12% as resolution goes down to FHD.
CPU limitations.
> GeForce 3060 is already beating 8 GB cards?
Ultra Nightmare settings consume a lot of VRAM; the 3060 is a 12 GB card.
> Anyway, back to the topic: DOOM: Eternal mit Raytracing und DLSS im Test (DOOM Eternal with ray tracing and DLSS tested)
The 3090 is 57% faster than the 6900XT at 4K, and 50% faster at 2K as well.
> Does the ultra nightmare vram heavy setting in Doom actually change the image in any way? Does it improve performance? Why is it there?
It does absolutely nothing for average static image quality; it just changes the amount of texture data cached in VRAM, like @DavidGraham says, hence why it is called "Texture Pool Size" and not "Texture Resolution" or "Texture Quality". A texture on Ultra Nightmare looks exactly the same as it does on Low. Lowering it, even down to "Low", only increases the chance that rapid camera movement, or perhaps a camera teleport, briefly shows a lower-res mip for a few frames. That is it.
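To picture that behaviour, here is a made-up sketch (not id Tech's actual streaming code): a fixed budget that only limits which mips stay cached, with a lower-res fallback returned while the wanted mip isn't resident yet. All names are invented.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Hypothetical model of a fixed "texture pool": the budget only limits how
// many mip bytes stay cached in VRAM. A request for a mip that is not yet
// resident returns the best mip currently available (possibly lower-res for a
// few frames) while the streamer loads the requested one in the background.
struct TexturePool {
    size_t budgetBytes;                          // set by the "Texture Pool Size" option
    size_t usedBytes = 0;
    // textureId -> highest-resolution mip level currently resident (0 = full res)
    std::unordered_map<uint32_t, int> residentMip;

    explicit TexturePool(size_t budget) : budgetBytes(budget) {}

    // Returns the mip level to sample this frame.
    int requestMip(uint32_t textureId, int wantedMip, size_t wantedBytes) {
        auto it = residentMip.find(textureId);
        int have = (it != residentMip.end()) ? it->second : 8; // a tiny fallback mip always exists
        if (have <= wantedMip)
            return wantedMip;                    // already resident: image quality identical at any pool size
        if (usedBytes + wantedBytes <= budgetBytes) {
            usedBytes += wantedBytes;            // pretend the async load finishes a few frames later
            residentMip[textureId] = wantedMip;
        }
        // With a smaller budget the upgrade above happens less often, so the
        // lower-res fallback mip stays on screen slightly longer (real engines
        // would also evict colder mips; that part is omitted here).
        return have;
    }
};
```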
> Makes me wonder why this is a setting in the first place if it's that useless. It's just going to confuse people.
They should've just made it separate from the quality presets.
Why not leave it at medium or high for all cards, or let the engine automatically choose based on the GPU's VRAM, graphics settings and active background programs?
This is one thing that definitely should improve for the next id Tech game. Their fixed memory allocation system also has some drawbacks, as we have seen with that strange DLSS memory allocation bug.
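A rough sketch of the "let the engine pick it" idea: map the reported VRAM budget to a pool size. The thresholds, names and the reserved-allocation parameter are all invented for illustration; on Vulkan the budget number could come from VK_EXT_memory_budget.

```cpp
#include <cstdint>

// Hypothetical auto-selection of the texture pool size from the VRAM budget.
// vramBudgetMiB would come from the driver (e.g. VK_EXT_memory_budget's
// heapBudget), and fixedAllocationsMiB is whatever the engine reserves for
// render targets, the BVH and other fixed allocations. Thresholds are made up.
enum class TexturePoolSize { Low, Medium, High, Ultra, UltraNightmare };

TexturePoolSize PickTexturePoolSize(uint64_t vramBudgetMiB, uint64_t fixedAllocationsMiB) {
    const uint64_t freeForStreaming = (vramBudgetMiB > fixedAllocationsMiB)
                                          ? vramBudgetMiB - fixedAllocationsMiB
                                          : 0;
    if (freeForStreaming >= 10000) return TexturePoolSize::UltraNightmare; // ~12 GB cards and up
    if (freeForStreaming >= 7000)  return TexturePoolSize::Ultra;          // 10-11 GB cards
    if (freeForStreaming >= 5000)  return TexturePoolSize::High;           // 8 GB cards
    if (freeForStreaming >= 3000)  return TexturePoolSize::Medium;         // 6 GB cards
    return TexturePoolSize::Low;
}
```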
> Usually chips get smaller, not larger. And HW becomes cheaper, not more expensive.
That was before Moore's law ended; now we are in a new reality. Chips are going bigger again.
> Turing was the opposite of that.
RDNA2 chips are expensive even though they are smaller than Turing. A 6900XT is $1000, and in most RT workloads it's no better than a Turing.
> Finally, it's simply not true AMD RT is 'not capable'. If you think so, you also think Turing is not capable, which I doubt.
Turing is definitely more capable RT-wise than RDNA2. Period. Current workloads (gaming/professional) are proof enough of that.
> That was before Moore's law ended; now we are in a new reality. Chips are going bigger again.
But they don't have to. Keep them small and achieve visual progress with better software.
> RDNA2 chips are expensive even though they are smaller than Turing. A 6900XT is $1000, and in most RT workloads it's no better than a Turing.
Yep, even RDNA was expensive. Got the same TF at half the price by sticking with GCN. So AMD also contributed to my somewhat exaggerated, depressed view of the PC platform's health.
> Turing is definitely more capable RT-wise than RDNA2. Period. Current workloads (gaming/professional) are proof enough of that.
In RT games I see it mostly ahead of Turing, even if RT performance in isolation is worse. So it's good enough in practice, and the higher flexibility may pay off if DXR evolves quickly (which I doubt).
Turing was more expensive, yes, but considering it was the only future-proof arch with DX12U support, it paid dividends for its user base, as opposed to the cheap dead-end RDNA1 GPUs.
RDNA2 needs around 50% more transistors than Turing to deliver around the same performance under a heavy RT workload. From a technical standpoint, AMD's implementation is worse than Turing's.
Yet another metric. Well, for that to have any meaning, you'd have to compare the full configs of Navi21 and TU102. As it stands, the 6800 uses 60 out of 80 execution units (and 75% of the ROPs, 100% of the memory configuration, for completeness' sake), while TU102 here uses 68 out of 72 (with 92% of memory and ROPs).
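As a back-of-the-envelope illustration of why that matters, take the commonly cited transistor counts (Navi 21 ~26.8B, TU102 ~18.6B) and, very crudely, assume usable transistors scale with the fraction of enabled execution units:

```cpp
#include <cstdio>

// Back-of-the-envelope sketch: scale each die's transistor count by the share
// of execution units enabled on the SKUs being compared (RX 6800 vs. a TU102
// part with 68 of 72 SMs). Assuming transistors scale linearly with enabled
// units is a crude simplification: caches, memory controllers etc. are not
// disabled along with the CUs/SMs.
int main() {
    const double navi21Transistors = 26.8e9;  // full Navi 21 (commonly cited figure)
    const double tu102Transistors  = 18.6e9;  // full TU102 (commonly cited figure)

    const double navi21Enabled = 60.0 / 80.0; // RX 6800: 60 of 80 CUs
    const double tu102Enabled  = 68.0 / 72.0; // 68 of 72 SMs enabled

    const double fullDieRatio = navi21Transistors / tu102Transistors;
    const double enabledRatio = (navi21Transistors * navi21Enabled) /
                                (tu102Transistors * tu102Enabled);

    std::printf("full-die transistor ratio:   %.2f\n", fullDieRatio); // ~1.44
    std::printf("enabled-units-scaled ratio:  %.2f\n", enabledRatio); // ~1.14
    return 0;
}
```

Under that (crude) assumption the ~44% full-die transistor gap shrinks to roughly 14%, which is the point: you have to normalize for what's actually enabled on each SKU before drawing per-transistor conclusions.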