Let's discuss the ways future console architectures could potentially improve in efficiency.

Frame reprojection, and also interpolation or extrapolation or generation or whatever people are calling it, are those strange techs that haven't progressed very far. With frame generation now in DLSS 3, there'll be renewed interest, and these should be a big boon. However, they are software solutions rather than next-gen tech requiring specific architecture, and in theory they can be applied this gen.
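As a very rough illustration of what reprojection/extrapolation amounts to in software, here is a minimal sketch that forward-warps the last rendered frame along renderer-supplied motion vectors. The function name and array layout are assumptions for illustration, not any engine's actual interface.

```python
import numpy as np

def extrapolate_frame(prev_frame, motion_vectors):
    """Forward-warp the last rendered frame along per-pixel motion vectors
    to synthesize an extra frame without rendering it.

    prev_frame:     (H, W, 3) float array, the last rendered frame
    motion_vectors: (H, W, 2) float array, screen-space motion in pixels
                    (the same kind of buffer a renderer already produces for TAA)
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Push every pixel along its motion vector (forward warp / scatter).
    dst_x = np.clip(np.round(xs + motion_vectors[..., 0]).astype(int), 0, w - 1)
    dst_y = np.clip(np.round(ys + motion_vectors[..., 1]).astype(int), 0, h - 1)

    out = np.zeros_like(prev_frame)
    # A real implementation resolves holes, disocclusions, transparencies and
    # HUD elements; here later writes simply win, which is where artifacts come from.
    out[dst_y, dst_x] = prev_frame[ys, xs]
    return out
```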
 
Since they are software solutions, what's keeping them from being implemented?
 
And it didn't really work anywhere near as well as DLSS 3 does.
I never said it did. However, you don't need DLSS 3 to get frame interpolation. There are other options (see TVs' motion upscaling). It just wasn't a route adopted and developed in a big way for games, unlike reconstruction AA techniques.

As I understand it, the OFA (optical flow accelerator) in Ada is a hardware accelerator that improves efficiency, but the techniques themselves are ML/software based and executable on appropriate general-compute hardware. In terms of next-gen architecture, more emphasis on ML is the only real influence, and that serves many uses.
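To underline that the estimation step is ordinary software, here's a rough sketch of flow-based interpolation using OpenCV's Farneback optical flow on general-purpose hardware. It's nowhere near DLSS 3 in quality and ignores occlusion handling entirely, but nothing in it needs a dedicated unit like Ada's OFA.

```python
import cv2
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    """Estimate dense optical flow purely in software and warp frame_a roughly
    halfway toward frame_b, faking a mid-frame (the general idea behind TV
    motion interpolation; quality is nowhere near DLSS 3)."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense per-pixel flow from A to B, computed on general-purpose hardware.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray_a.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Crude backward warp: sample frame_a half a flow vector "behind" each pixel.
    # Occlusions are ignored, which is where the familiar interpolation artifacts appear.
    map_x = (xs - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (ys - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```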

AMD has its own motion upsampling tech.

TVs have been doing it. 30 -> 60 FPS works alright.
That's perhaps part of it, although it'd be better handled in-game; discussion here.
 
I feel that the next big jump in efficiency will come with moving away from screen-space shading/rendering to object space.

That will allow for full decoupling of shading and rendering, so shade resolution can vary in space, depth, and time (and shading can be done asynchronously with rendering).

Since the vast majority of geometry seen in a scene has very little variation in shading from frame to frame, those shades can be reused several times, saving an insane amount of resources.
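A toy sketch of that reuse idea, assuming a made-up scene and per-object reuse windows: shading is charged only when a cached atlas tile expires, while every frame still samples the atlas.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    shade_texels: int    # texels in this object's shade-atlas tile
    reuse_frames: int    # how many frames a cached shade stays usable
    last_shaded: int = -999

def simulate(objects, frames):
    """Toy scheduler: re-shade an object's atlas tile only when its cached
    result has expired; every frame still rasterizes by sampling the atlas."""
    shaded_texels = 0
    for frame in range(frames):
        for obj in objects:
            if frame - obj.last_shaded >= obj.reuse_frames:
                shaded_texels += obj.shade_texels   # actual shading work
                obj.last_shaded = frame
            # else: the raster pass reuses the cached tile for ~free
    return shaded_texels

scene = [SceneObject("walls",  256 * 256, reuse_frames=8),
         SceneObject("hero",   512 * 512, reuse_frames=1),   # needs fresh shading
         SceneObject("skybox", 1024 * 1024, reuse_frames=30)]

every_frame = sum(o.shade_texels for o in scene) * 60
print(simulate(scene, 60), "texels shaded vs", every_frame, "if shaded every frame")
```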
 
Object/UV-space shading is a super cool technique, but I'm not sure it actually saves that much. For one, you either need to be able to handle the peak load of every object on screen updating at once (when a light moves), or you accept artifacts from objects updating across multiple frames.
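To make that trade-off concrete, here's a toy budgeted scheduler (all names and texel counts invented): anything that doesn't fit the per-frame budget keeps last frame's shading, which is the artifact in question.

```python
def budgeted_reshade(stale_objects, texel_budget_per_frame):
    """Spread re-shading of stale atlas tiles across frames under a fixed
    budget. Whatever doesn't fit stays on last frame's shading, which is
    exactly the multi-frame-update artifact described above."""
    frames = []
    queue = list(stale_objects)                      # (name, texels) pairs
    while queue:
        spent, this_frame, deferred = 0, [], []
        for name, texels in queue:
            if spent + texels <= texel_budget_per_frame:
                this_frame.append(name)
                spent += texels
            else:
                deferred.append((name, texels))      # visibly out of date this frame
        if not this_frame:                           # a single tile bigger than the budget:
            name, _ = deferred.pop(0)                # shade it anyway to avoid stalling
            this_frame.append(name)
        frames.append(this_frame)
        queue = deferred
    return frames

# A moving light invalidates everything at once; with a 300k texel/frame
# budget the update gets smeared across several frames.
stale = [("floor", 250_000), ("statue", 120_000), ("walls", 200_000)]
print(budgeted_reshade(stale, texel_budget_per_frame=300_000))
# -> [['floor'], ['statue'], ['walls']]
```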
 

It depends on scene complexity, lighting load, etc. The initial cost for simpler setups is higher, but as scene and lighting complexity increase, object space (which increases more or less linearly) quickly outpaces screen space (which increases closer to exponentially).

It's especially meaningful when you start thinking about ray/path tracing.
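As a deliberately crude illustration of the scaling argument above (every number here is a placeholder), compare shading every covered pixel every frame against shading only the visible atlas texels, amortized over a reuse window:

```python
# Deliberately crude per-frame cost model with made-up numbers, only to show
# where the amortization comes from; real costs depend heavily on the renderer.
pixels         = 3840 * 2160     # screen space: every covered pixel, every frame
visible_texels = 6_000_000       # object space: atlas texels actually mapped on screen
lights         = 32
avg_reuse      = 4               # frames a cached shade survives on average

screen_space = pixels * lights                      # shaded from scratch each frame
object_space = visible_texels * lights / avg_reuse  # amortized over the reuse window

print(f"screen space: {screen_space:,.0f} shade evaluations per frame")
print(f"object space: {object_space:,.0f} shade evaluations per frame")
```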
 
Is there a hardware-specific feature to enable/accelerate this?
 

You'll want to accelerate visibility testing (either triangle indexing or some other way of issuing parametrized shading on geometry rather than screen-space pixels), efficient ways to store and retrieve mapped shade atlases, etc. Turing already added some support for texture-space shading (TSS), and IIRC VRS support also helps with implementing this.
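A hedged sketch of the kind of visibility-driven flow those features are meant to accelerate, with invented names and data: a visibility pass marks which atlas tiles are actually needed, and only those get shaded.

```python
from collections import defaultdict

def visibility_pass(visible_primitives):
    """Collect the set of shade-atlas tiles touched by visible geometry.
    `visible_primitives` stands in for whatever the rasterizer or ray caster
    emits, e.g. (object_id, tile_id) pairs; hardware features like Turing's
    texture-space shading path are about producing this list cheaply."""
    needed = defaultdict(set)
    for object_id, tile_id in visible_primitives:
        needed[object_id].add(tile_id)
    return needed

def shade_visible_tiles(needed, shade_tile):
    """Shade only the tiles visibility marked, at whatever rate/resolution suits."""
    for object_id, tiles in needed.items():
        for tile_id in sorted(tiles):
            shade_tile(object_id, tile_id)

# Fake visibility output and a stand-in 'shader' that just logs its work.
vis = [("statue", 3), ("statue", 3), ("statue", 7), ("floor", 0)]
shade_visible_tiles(visibility_pass(vis),
                    lambda obj, tile: print(f"shading {obj} tile {tile}"))
```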

These three papers (1, 2, and 3) are really good reads on the topic.

A few notable advantages of this approach are that it makes it really cheap to do proper stochastic effects (defocus blur and motion blur) and multi-view rendering, like VR. For VR, two viewports will have almost the same cost as a single viewport.
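Rough arithmetic for that VR claim, with placeholder timings: the atlas shading is view-independent and paid once, so only the comparatively cheap per-viewport raster/sample pass doubles.

```python
# Placeholder frame-time numbers, only to illustrate why a second viewport is
# nearly free when shading lives in a view-independent object-space atlas.
shade_atlas_ms = 9.0   # shade the atlas once (view-independent)
raster_ms      = 1.5   # visibility + atlas sampling for one viewport

single = shade_atlas_ms + raster_ms
stereo = shade_atlas_ms + 2 * raster_ms

print(f"1 view: {single:.1f} ms, 2 views: {stereo:.1f} ms "
      f"(+{stereo / single - 1:.0%})")
```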
 