Digital Foundry Article Technical Discussion [2022]

Status
Not open for further replies.
RTX 5000 series is going to be very interesting. I suspect Nvidia will introduce a brand new architecture again like they did with Turing, based on MCM.

If you have Ampere or Turing GPUs, the wait for the RTX 5000 series might be worth it.
 

The 3090 to 4090 jump will be bigger than even 980 Ti to 1080 Ti, and that's been considered by far the biggest generational leap.

You can always wait and argue about the price etc., but in terms of performance, the 4090 will absolutely deliver. Wait until tomorrow; the bar graphs will look silly when you see the difference visually.
 
John's answer to Q5 is wrong, but only because he stated it as an absolute.
Rich pointed out one example, but I have another.
The game Risen, which is BC on Xbox consoles, is unplayable on anything faster than an Xbox One/S. The camera speed is tied to the framerate, and the slightest of movements will cause your view to spin around at very high speeds on Xbox One X and up. The game provides no ability to adjust this sensitivity. Base Xbox One runs the game slow enough that the camera (and thus aiming for combat) is manageable.
I haven't played every BC game, but it's possible there is another game like this.
I suppose you could also say that it's necessary to own an Xbox 360 to play the much larger library of OG Xbox games that are BC for it, but not later Xbox consoles.
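For anyone curious why Risen's camera misbehaves on faster consoles: when a game applies a fixed rotation step every frame, the rotation rate scales with framerate. A minimal sketch of the problem and the standard delta-time fix (my own illustration, not the game's actual code):

```python
# Sketch of why tying camera speed to framerate breaks at high fps,
# and the usual frame-rate-independent fix.

def yaw_step_broken(sensitivity_deg_per_frame):
    # Framerate-dependent: a fixed step per frame means 120 fps
    # rotates the camera four times faster than 30 fps.
    return sensitivity_deg_per_frame

def yaw_step_fixed(sensitivity_deg_per_sec, dt):
    # Framerate-independent: scale by the frame's elapsed time,
    # so total rotation per second is constant at any framerate.
    return sensitivity_deg_per_sec * dt

# Holding the stick for one second at two framerates:
for fps in (30, 120):
    dt = 1.0 / fps
    broken = sum(yaw_step_broken(2.0) for _ in range(fps))
    fixed = sum(yaw_step_fixed(60.0, dt) for _ in range(fps))
    print(f"{fps} fps: broken={broken:.0f} deg, fixed={fixed:.0f} deg")
```

The broken version sweeps 60 degrees per second at 30 fps but 240 at 120 fps, which is exactly the "view spins around at very high speeds" symptom on the faster consoles.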
 
We can hope that emulation progress will be at least as impressive as RT tech by that time.
Well, in regards to x64 to ARM, Rosetta 2 on the Mac really impresses when it comes to emulating x64 code and getting it to run on ARM. Most likely the M1 carries a lot of the success, but the solution is really good.
 
😮 These GPUs are the size of a Series S.


[Image: rtx-4090-series-s-1024x640.jpg]
 
Actually, Intel's solution will always differentiate between XMX and DP4a from a performance and quality perspective, so in a sense I can't see Intel's solution being better.
What does RT solution have to do with XMX and DP4a? Those are used by XeSS, not RT?
 
Nope. Don't slander the Series S. The GPU isn't that small. It's larger.

LOL. I mean, if this is the future, where the cost of silicon is so high that GPU manufacturers are going with high-frequency silicon paired with massive cooling, I doubt newly released consoles are ever going to come close to modern discrete cards again. The cost of shrinking isn't getting cheaper, while the transition to smaller silicon is taking longer.
 
Chiplets. If not chiplets. Completely different architectures. If not… I dunno. Cloud ?
 
Intel's and Nvidia's RT solutions are the same, so it would be a stretch to say one is better than the other. The poster might be thinking of Intel's upscaling solution applied in an RT gaming situation, in which case XeSS performance and visual results would differ depending on whether an Intel GPU (XMX) or a competitor's GPU (DP4a) is used.
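For context on the XMX/DP4a split: XeSS evaluates its network with low-precision integer math. Arc GPUs run it on dedicated XMX matrix engines, while the fallback path for other GPUs uses the DP4a instruction, a 4-element int8 dot product accumulated into a 32-bit integer. A rough Python model of what a single DP4a operation computes (the function name and shape are my illustration, not a real API):

```python
def dp4a(a, b, acc):
    """Model of the DP4a GPU instruction: dot product of two
    4-element int8 vectors, accumulated into a 32-bit integer."""
    assert len(a) == len(b) == 4
    return acc + sum(x * y for x, y in zip(a, b))

# One instruction performs four multiply-adds, e.g. for one slice
# of a quantized neural-network layer:
print(dp4a([1, 2, 3, 4], [5, 6, 7, 8], 0))  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```

XMX hardware does many such dot products per cycle as full matrix operations, which is why the same XeSS network runs faster (and can use a heavier quality model) on Arc than on the DP4a path.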
 
Now that you mention it, I hope AMD gets RT right this time and can compete. I heard that the new AMD GPUs are going to be made of chiplets? (@iroboto used that word, and I recall reading something along those lines about the 7000 series from AMD.)

IIRC, that chiplet architecture is going to allow the 7000 series to be revolutionary.
 
Extremely similar in my experience; it's just an in-the-moment reading rather than an average over time.

That's good. So people using frame generation should be able to use the Nvidia performance overlay to help adjust their settings to improve latency.
 
Great video. Excellent explanations and examples.

The only added info I'd like would be showing what setting (usually higher resolution) would yield the equivalent latency without DLSS3. It seems (at first blush) best to account for DLSS3's added latency as a tradeoff for enabling added visual features, as with the video's final Cyberpunk RT off vs RT on comparison.

I'm kind of curious how an outlier game like Industria (~40fps at 1080p on a 3080), which ran so poorly on the previous gen with RT, fares on this new gen (GeForce 40 series, Intel's new cards, the next Radeons). Do naive or unoptimized RT implementations scale any better with newer RT hardware?
 