Non-DLSS Image Reconstruction techniques? *spawn*

Mods: Consider creating a *spawn thread* for console-related "Shader Based Image Reconstruction" techniques not using Nvidia's DLSS (tensor core) methodology.

Paging @TheAlSpark, if you have the time to go through tagging posts to migrate to such a new discussion.

Pharma, thanks for the suggestion. It seems worthwhile.
 
True, if MS can provide a decent AI reconstruction process for the S, that would be ideal. However, AFAIK there has been no indication that MS will do anything other than support DirectML.

My point was more that the S doesn't seem to be capable of running a DLSS-style model at all. It's simply too slow. The X on the other hand is fast enough, if only barely. That could lead to some really awkward market segmentation issues for Microsoft. The S is supposed to be a 1440p version of the X, but give the X the advantage of ML upscaling and the gap between them becomes much, much harder to bridge.

To put it into context, I just learned the S is only a 4 TFLOP GPU, significantly less than the 1660 Ti, which Nvidia launched without tensor cores on account of it not being fast enough for them to be of any use. A single-frame upscale on the S should take around 15 ms!

So either Microsoft would need a vastly more efficient model than DLSS to make it feasible, or, far more likely, it'll never happen, which in turn casts doubt on it seeing the light of day on the X either.
 
DLSS started as a shader implementation and now uses tensor cores

MS can train the models using Azure and then run them on the Series S.

Also, again, AMD already has FidelityFX CAS + upscaling, and there is also Radeon Image Sharpening, which seems to work really well for image quality and performance.
- Death Stranding from DF showing off FidelityFX CAS + sharpening

If you go to the end of the video, he talks about Playground Games using Forza Horizon 3 with ML upscaling. So if you combine what AMD already has with ML, while it may not be as good as DLSS 2.1, it may be more than passable for someone buying a $300 console.

Here DF did some math

RTX 2060 has 103.2 INT8 TOPS and render time would be 2.5 ms.
Series X would be 49 INT8 TOPS and render time would be 5 ms, with them assuming similar, near-linear scaling.
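For what it's worth, here's that extrapolation written out as a quick Python sketch (all figures are the ones quoted above; the near-linear-scaling bit is an assumption, not a measurement):

```python
# Back-of-envelope extrapolation of the DF figures quoted above: assume the
# ML upscaling pass is throughput-bound and scales roughly linearly with
# INT8 TOPS (a big assumption).
rtx2060_int8_tops = 103.2   # quoted above
rtx2060_upscale_ms = 2.5    # quoted above
xsx_int8_tops = 49.0        # Series X, quoted above

xsx_upscale_ms = rtx2060_upscale_ms * (rtx2060_int8_tops / xsx_int8_tops)
print(f"Estimated Series X upscale cost: {xsx_upscale_ms:.1f} ms")  # ~5.3 ms
```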

The question is, how about INT4? Will that be enough to do it, and will the Series S have enough to make it worthwhile?

DLSS 2 is great, but if I'm buying a $300 piece of hardware I wouldn't even mind FidelityFX CAS + sharpening. If AMD adds in machine learning to help it out, I could easily see it being a big hit on the Series S and even the Series X.

I am sure Navi 2 will have hardware-assisted features.
 
DLSS started as a shader implementation
DLSS actually started as an ML solution; 1.0 was using tensor cores, though the approach was different.
DLSS "1.9", used in Control at launch, was the only DLSS which wasn't using tensor cores, and it was basically TAAU.
DLSS 2.0 took this TAAU and added ML back into it, which allowed them to again clean up many of the issues TAA has.
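For anyone wondering what the TAAU core actually does, here's a toy sketch (grayscale, nearest-neighbour sampling, made-up function and parameter names; nothing like production code and definitely not DLSS itself). The heuristic history clamp in step 3 is the part DLSS 2.0 replaces with a trained network:

```python
# Toy TAAU-style accumulation step in NumPy: reproject last frame's high-res
# history via motion vectors, rectify it against the current low-res frame's
# neighbourhood, then blend. Real implementations run as shaders and use
# bilinear/bicubic sampling; this is only meant to show the idea.
import numpy as np

def taau_step(current_lr, motion_hr, history_hr, jitter_px, scale=2, alpha=0.1):
    """current_lr: (h, w) jittered low-res frame.
    motion_hr:  (h, w, 2) motion vectors in high-res pixels (dy, dx).
    history_hr: (h*scale, w*scale) accumulated high-res history."""
    H, W = history_hr.shape
    h, w = current_lr.shape
    out = np.empty_like(history_hr)
    for y in range(H):
        for x in range(W):
            # 1. Reproject: fetch the history where this pixel was last frame.
            dy, dx = motion_hr[y // scale, x // scale]
            hist = history_hr[int(np.clip(y - dy, 0, H - 1)),
                              int(np.clip(x - dx, 0, W - 1))]

            # 2. Nearest low-res sample for this high-res pixel (jitter-aware).
            ly = int(np.clip(round((y - jitter_px[0]) / scale), 0, h - 1))
            lx = int(np.clip(round((x - jitter_px[1]) / scale), 0, w - 1))
            cur = current_lr[ly, lx]

            # 3. History rectification: clamp history to the local min/max of
            #    the current frame. This is the heuristic DLSS 2.0 swaps out
            #    for a network that decides how much history to trust.
            nb = current_lr[max(ly - 1, 0):ly + 2, max(lx - 1, 0):lx + 2]
            hist = np.clip(hist, nb.min(), nb.max())

            # 4. Exponential blend of rectified history and the new sample.
            out[y, x] = (1.0 - alpha) * hist + alpha * cur
    return out
```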
 
If you go to the end of the video, he talks about Playground Games using Forza Horizon 3 with ML upscaling.

I'm not able to watch the videos at the moment as I'm on a limited connection, but are they talking about the Microsoft DirectML demonstration? If so, that's using Nvidia's DLSS model and hardware.

Here DF did some math

RTX 2060 has 103.2 INT8 TOPS and render time would be 2.5 ms.
Series X would be 49 INT8 TOPS and render time would be 5 ms, with them assuming similar, near-linear scaling.

Yes, this is what my own analysis is based on. If the X is 5 ms and the S has 1/3rd of the throughput, then the S would be 15 ms. Obviously way too slow for 60 fps, where you only have 16.6 ms for the entire frame. 30 fps might make it worthwhile, but the net benefit would be very small, if it exists at all.
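Same math in a couple of lines of Python, checked against the frame budgets (the 1/3-throughput ratio is an estimate from the relative GPU sizes, not a confirmed spec):

```python
# Extend the linear-scaling assumption to the Series S and compare against
# frame budgets. The "S has ~1/3 of the X's ML throughput" ratio is an
# estimate, not an official figure.
xsx_upscale_ms = 5.0
xss_upscale_ms = xsx_upscale_ms * 3          # ~15 ms

for fps in (60, 30):
    budget_ms = 1000.0 / fps                 # 16.7 ms / 33.3 ms
    remaining = budget_ms - xss_upscale_ms
    print(f"{fps} fps: {remaining:.1f} ms left for everything else "
          f"after a {xss_upscale_ms:.0f} ms upscale")
```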

The question is, how about INT4? Will that be enough to do it, and will the Series S have enough to make it worthwhile?

INT4 is just double the INT8 rate on both RDNA2 and Turing as far as I'm aware, so the end result would be the same. I'd say Microsoft's only route to an ML upscaling solution on the S is to produce a model that's significantly more efficient than DLSS, and I'm not sure how possible that is with their current hardware within the timescale of this generation.

DLSS 2 is great, but if I'm buying a $300 piece of hardware I wouldn't even mind FidelityFX CAS + sharpening.

As someone who thinks native 1440p with good TAA offers near-perfect image quality, I would have to agree, especially if playing on a TV rather than a monitor.
 
I'm not able to watch the videos at the moment as I'm on a limited connection, but are they talking about the Microsoft DirectML demonstration? If so, that's using Nvidia's DLSS model and hardware.
It's a mix of different upscaling methods.

Yes, this is what my own analysis is based on. If the X is 5 ms and the S has 1/3rd of the throughput, then the S would be 15 ms. Obviously way too slow for 60 fps, where you only have 16.6 ms for the entire frame. 30 fps might make it worthwhile, but the net benefit would be very small, if it exists at all.

INT4 is just double the INT8 rate on both RDNA2 and Turing as far as I'm aware, so the end result would be the same. I'd say Microsoft's only route to an ML upscaling solution on the S is to produce a model that's significantly more efficient than DLSS, and I'm not sure how possible that is with their current hardware within the timescale of this generation.
Except, if INT4 is double the rate of INT8, then if it's usable for a DLSS-type program, the X would only need 2.5 ms and the S would be 7.5 ms. That would put it right back in the ballpark of being worthwhile to use.

As someone who thinks native 1440p with good TAA offers near-perfect image quality, I would have to agree, especially if playing on a TV rather than a monitor.
I mean, on a $300 all-in piece of hardware, yes, 1440p with FidelityFX CAS + sharpening, or even 1080p, would be a great value IMO. I would rather spend $200 more, $500 in total, to get something native 4K. However, for many, that extra $200 is bank-breaking.

Of course, on an $800 video card that I'm adding to thousands of dollars' worth of other hardware, I would surely want more.

I think AMD will have an answer to DLSS, or at least an upgraded version of FidelityFX CAS. I guess we have over a month to wait and find out lol
 
Except, if INT4 is double the rate of INT8, then if it's usable for a DLSS-type program, the X would only need 2.5 ms and the S would be 7.5 ms. That would put it right back in the ballpark of being worthwhile to use.

I don't think that one can just assume that an "INT4 DLSS" network would be 2x as fast as an INT8 one. The INT4 one might need extra steps to reach similar results, thus negating part of the gains.
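A quick way to see why the 2x isn't free (illustrative only, nothing DLSS-specific): quantising the same weights to 4 bits leaves 16 representable levels instead of 256, so a network typically needs retraining, mixed precision or extra capacity to claw the quality back, which eats into the theoretical throughput win.

```python
# Illustrative only: symmetric fake-quantisation of some random "layer
# weights" to INT8 vs INT4 and the resulting reconstruction error.
import numpy as np

def fake_quantize(w, bits):
    qmax = 2 ** (bits - 1) - 1                     # 127 for INT8, 7 for INT4
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                               # dequantised approximation

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=100_000).astype(np.float32)

for bits in (8, 4):
    err = np.abs(fake_quantize(weights, bits) - weights).mean()
    print(f"INT{bits}: mean abs quantisation error = {err:.6f}")
```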
 
I may be misremembering, but I think @nAo used a LogLuv format to pack HDR info into a 32bpp RGBA format for a PS3-era game.
It wasn't really doing math in int4/8 though, but storing the results in a buffer with int8 per channel (rough sketch of the idea below).

INT4/8 may be very restrictive, but perhaps there are ways to use them for things which really do not need a lot of precision. (Fixed point should give some additional precision.)
What was amazing in the 386 or MMX era is amazing again.
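On the LogLuv point, the general idea (a rough sketch of the scheme, not the exact layout that game used) is to store log-encoded luminance in 16 bits plus chromaticity in 8+8 bits, so HDR colour fits an ordinary 32bpp render target:

```python
# Sketch of the general LogLuv packing idea: 16-bit log luminance plus
# 8-bit u' and 8-bit v' chromaticity = 32 bits per HDR pixel.
import numpy as np

RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],     # linear sRGB -> CIE XYZ
                    [0.2126, 0.7152, 0.0722],
                    [0.0193, 0.1192, 0.9505]])

def encode_logluv(rgb, l_min=-16.0, l_max=16.0):
    X, Y, Z = RGB2XYZ @ rgb
    d = X + 15 * Y + 3 * Z + 1e-9
    u, v = 4 * X / d, 9 * Y / d                   # CIE u', v' chromaticity
    logY = np.clip(np.log2(max(Y, 1e-9)), l_min, l_max)
    L16 = round(float(logY - l_min) / (l_max - l_min) * 65535)
    return L16, round(u * 255), round(v * 255)

def decode_logluv(L16, u8, v8, l_min=-16.0, l_max=16.0):
    Y = 2.0 ** (L16 / 65535 * (l_max - l_min) + l_min)
    u, v = u8 / 255, v8 / 255
    X = Y * 9 * u / (4 * v + 1e-9)
    Z = Y * (12 - 3 * u - 20 * v) / (4 * v + 1e-9)
    return np.linalg.solve(RGB2XYZ, np.array([X, Y, Z]))  # back to linear RGB

packed = encode_logluv(np.array([2.5, 0.8, 0.1]))          # some HDR colour
print(packed, decode_logluv(*packed))                      # round-trips closely
```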
 
Devs surely can find a use for INT4/8 in some graphics and/or compute work outside of AI, no?
A generation is a long time, and ML in games is really in its infancy. Once it happens, maybe more will pile in. But the challenge is getting things done in less than 1-2 milliseconds, and that's not particularly easy. You've got DLSS at ~2 ms. If you have AI or other things, how many actors on screen etc. need to run their AI? In, say, driving games, more cars = more AI being run. It will eat up your budget very fast.
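Just to put rough numbers on that (the per-actor cost is completely made up, purely to show how fast linear scaling collides with the frame budget):

```python
# Made-up numbers, purely to illustrate how per-actor ML inference scales
# linearly and eats into a 60 fps frame budget.
frame_budget_ms = 16.6      # ~60 fps frame time
upscaling_ms = 2.0          # hypothetical DLSS-style pass
per_agent_ms = 0.1          # hypothetical cost of one actor's ML inference

for agents in (8, 24, 48):
    ml_total = upscaling_ms + agents * per_agent_ms
    print(f"{agents:2d} actors: {ml_total:4.1f} ms of ML, "
          f"{frame_budget_ms - ml_total:5.1f} ms left for everything else")
```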
 
I really doubt we're going to be seeing much ML/DL for client-side game logic/AI, especially for stuff like driving games where the existing methods are plenty sufficient, not to mention being robust/deterministic such that QA isn't a total shit show. I'd expect the biggest impact this generation is going to come on the development side (tools to assist and accelerate the art pipeline).
 