Nvidia Blackwell Architecture Speculation

  • Thread starter Deleted member 2197
I'm wondering if Reflex 2 requires you to render a frame that's bigger than your viewport. In any of the warping techniques I've seen before, you basically render a bigger frame, so when you warp you can bring in part of the image that was off screen.
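For what it's worth, the "render bigger than the viewport" idea can be sketched as a guard-band crop: render an oversized frame, then shift a viewport-sized crop window by the late camera yaw before scan-out. This is just an illustration of the overscan approach being speculated about here, not how Reflex 2 actually works, and the flat pixels-per-degree conversion ignores perspective distortion:

```python
import numpy as np

def crop_with_guard_band(frame, viewport_w, viewport_h, yaw_delta_deg,
                         hfov_deg=90.0):
    """Pick a viewport-sized crop out of an oversized frame, shifted to
    account for a late camera yaw. Illustrative only: a real warp would
    resample per pixel (with depth), this just translates the crop window.
    """
    frame_h, frame_w = frame.shape[:2]
    # Horizontal pixels per degree of yaw, flat-screen approximation.
    px_per_deg = viewport_w / hfov_deg
    shift = int(round(yaw_delta_deg * px_per_deg))
    # Start from a centered crop, shift by the late yaw, clamp to the
    # guard band so we never read outside the oversized frame.
    x0 = (frame_w - viewport_w) // 2 + shift
    x0 = max(0, min(frame_w - viewport_w, x0))
    y0 = (frame_h - viewport_h) // 2
    return frame[y0:y0 + viewport_h, x0:x0 + viewport_w]
```

The clamp is where this breaks down: a flick bigger than the guard band has nothing left to bring in, which is presumably why something smarter is needed at the edges.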
 
In a sense it’s what people should expect right now - a new gen with a modest but respectable raw uplift at similar (or better) prices and a focus on better features/AI. In that sense it has the makings of a success, especially in the mid range.
Yup. If a transistor in the new process node costs the same as a transistor in the old node then perf/$ increase from node shrinks is gone. What we'll see for raw flops is a tepid uplift from increased clock speeds and slight cost reduction as yields improve and the node gets commoditized. The big face-melting gains are going to come from either fixed-function blocks (RT) and/or major algorithmic advances (AI).
 
I'm wondering if Reflex 2 requires you to render a frame that's bigger than your viewport. In any of the warping techniques I've seen before, you basically render a bigger frame, so when you warp you can bring in part of the image that was off screen.
Yeah, I’m not sure how else they could do it without major artifacting or other oddities. They really focused on its use in esports where the frame rates are already pretty high and thus the pixel movement in any direction per frame update should be relatively small (and I don’t think many people play esports at 4k but I don’t pay much attention to it).
 
Yeah, I’m not sure how else they could do it without major artifacting or other oddities. They really focused on its use in esports where the frame rates are already pretty high and thus the pixel movement in any direction per frame update should be relatively small (and I don’t think many people play esports at 4k but I don’t pay much attention to it).

Yah, this will definitely be a technology that works better at higher frame rates. I'd still like to have seen a faster 180 degree flick, or even a 90 degree one to see how it handled it.
 
Yeah, I’m not sure how else they could do it without major artifacting or other oddities. They really focused on its use in esports where the frame rates are already pretty high and thus the pixel movement in any direction per frame update should be relatively small (and I don’t think many people play esports at 4k but I don’t pay much attention to it).
In the CES thread there is a video on Reflex 2, and in the example they show, they say they are generating those missing parts of the frame with AI using rendering buffers.
 
Yah, this will definitely be a technology that works better at higher frame rates. I'd still like to have seen a faster 180 degree flick, or even a 90 degree one to see how it handled it.
I’m sure DF will do a deep dive (a lot of new features to explore here). I’d guess there is a limit to how far any frame can “warp.”
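There probably is a hard limit, and you can put rough numbers on why frame rate matters so much here. A back-of-the-envelope calculation (my own, not Nvidia's; assumes a flat pixels-per-degree mapping, a 1920-wide viewport, and 90° horizontal FOV):

```python
def per_frame_shift_px(flick_deg, flick_time_s, fps, viewport_w=1920,
                       hfov_deg=90.0):
    """Rough horizontal pixel shift per displayed frame during a constant-
    speed flick. Flat-screen approximation; ignores projection distortion.
    """
    deg_per_frame = flick_deg / (flick_time_s * fps)
    return deg_per_frame * (viewport_w / hfov_deg)
```

A 180° flick over 0.3 s works out to roughly 53 px of new content per frame at 240 fps, but about 213 px per frame at 60 fps: four times the strip of pixels the warp has to fill, which is consistent with this being pitched at high-frame-rate esports.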
 
I think we are starting to witness the results of NVIDIA injecting loads of cash into gaming; the amount of new tech in this launch is overwhelming, and I'm having difficulty keeping track of it all. They are increasing spending, and the results are more path-traced and heavily ray-traced titles, along with neural rendering for several elements.

Funny some people thought NVIDIA doesn't care about gaming anymore.
 
In the CES thread there is a video on Reflex 2, and in the example they show, they say they are generating those missing parts of the frame with AI using rendering buffers.
Aha! I knew there was a reason I became a lawyer and not a software engineer! Sounds clever. So even if there is artifacting it will presumably be limited to the edge of frames, so you probably wouldn’t notice it too much in actual gameplay.
 
That’s a lot of info dropped all at once. Lots of stuff for DF to chew on.

This is particularly nice:

For many games that haven’t updated yet to the latest DLSS models and features, Nvidia app will enable support through a new DLSS Override feature. Alongside the launch of our GeForce RTX 50 Series GPUs, after installation of a new GeForce Game Ready Driver and the latest NVIDIA app update, the following DLSS override options will be available in the Graphics > Program Settings screen, under “Driver Settings” for each supported title.

  • DLSS Override for Frame Generation - Enables Multi Frame Generation for GeForce RTX 50 Series users when Frame Generation is ON in-game.
  • DLSS Override for Model Presets - Enables the latest Frame Generation model for GeForce RTX 50 Series and GeForce RTX 40 Series users, and the transformer model for Super Resolution and Ray Reconstruction for all GeForce RTX users, when DLSS is ON in-game.
  • DLSS Override for Super Resolution - Sets the internal rendering resolution for DLSS Super Resolution, enabling DLAA or Ultra Performance mode when Super Resolution is ON in-game.
 
Aha! I knew there was a reason I became a lawyer and not a software engineer! Sounds clever. So even if there is artifacting it will presumably be limited to the edge of frames, so you probably wouldn’t notice it too much in actual gameplay.
Seems to be no very obvious artifacting; they show an on-and-off comparison. I assume they used AI instead of rendering a bigger frame so they don't run into occlusion artifacting, which they show around the gun as it moves.

Time stamped it to what you want to see.
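If the warp really is a shift plus AI infill, the region handed to the model is easy to picture: it's the strip of pixels the shift leaves empty at the trailing edge. A minimal sketch of that mask (my guess at the idea shown in the video, not Nvidia's actual method):

```python
import numpy as np

def edge_infill_mask(viewport_w, viewport_h, shift_px):
    """Boolean mask of pixels a pure horizontal shift-warp leaves empty
    at the frame edge -- i.e. the region an inpainting model (fed with
    rendering buffers) would have to fill in. Hypothetical sketch.
    """
    mask = np.zeros((viewport_h, viewport_w), dtype=bool)
    if shift_px > 0:       # camera turned right: a right-edge strip is revealed
        mask[:, viewport_w - shift_px:] = True
    elif shift_px < 0:     # camera turned left: a left-edge strip is revealed
        mask[:, :-shift_px] = True
    return mask
```

Note this only covers the frame-edge case; disocclusion around foreground objects like the gun would need a per-pixel depth test rather than a simple strip.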

 
I think we are starting to witness the results of NVIDIA injecting loads of cash into gaming; the amount of new tech in this launch is overwhelming, and I'm having difficulty keeping track of it all. They are increasing spending, and the results are more path-traced and heavily ray-traced titles, along with neural rendering for several elements.

Funny, some people thought NVIDIA doesn't care about gaming anymore.
My thoughts exactly - we all knew NVIDIA would aim to stay ahead on software features, but this is remarkable. Between that and the prices, Blackwell looks like a worst-case scenario for AMD and Intel.
 
In the CES thread there is a video on Reflex 2, and in the example they show, they say they are generating those missing parts of the frame with AI using rendering buffers.

I still don't understand how that's going to work if you're disoccluding something that you've never seen ... Like you turn a corner for the first time, how is it going to draw in what's there?
 
Seems to be no very obvious artifacting; they show an on-and-off comparison. I assume they used AI instead of rendering a bigger frame so they don't run into occlusion artifacting, which they show around the gun as it moves.

Time stamped it to what you want to see.

Yep - the reality is that even to the extent there are artifacts, they would occupy a strip a handful of pixels wide at most, outside your primary field of view. And if they can keep things relatively clean even at low frame rates, it could be a major boon for lower-mid-range cards, since it would help ameliorate one of the major downsides of low-FPS gaming.
 
I still don't understand how that's going to work if you're disoccluding something that you've never seen ... Like you turn a corner for the first time, how is it going to draw in what's there?
That's above my pay grade lol ;) I'm guessing the AI does whatever it does for frame generation in the same scenario, especially if they could have 3 future frames rendered. Thinking about it, in an FPS game with the reticle centre screen, you're always going to have some pixels to work with, right? Even peeking, you never have your crosshair jammed right against the side of a viewport with nothing rendered left or right of it.
 
I just can't get over the fact that the RTX 5090 actually turned out to be a 2-slot design. I did not believe that for a second, unless it had an AIO liquid cooler... Have to give credit to Kopite7kimi; he said it back in May. I'm very curious to see how the cooler performs, what the actual power consumption is, and how much performance 3rd parties will get out of it.
 