Baseless Next Generation Rumors with no Technical Merits [post E3 2019, pre GDC 2020] [XBSX, PS5]

It sounds like the "impossible" scenario from when people were doubting 8 GB of GDDR5 on PS4. I know it's far-fetched, but I hope we get surprised again in a positive way :p
 
In the grand scheme of things the difference in FLOPS doesn't matter, because most 3rd-party developers will just target the lowest common denominator, so both versions will effectively be using the same FLOPS, no?

Unless that difference is 1.9

Tommy McClain
 
In the grand scheme of things the difference in FLOPS doesn't matter, because most 3rd-party developers will just target the lowest common denominator, so both versions will effectively be using the same FLOPS, no?

Unless that difference is 1.9

Tommy McClain

It's okay man. You did your best.
 
Sony makes some of the best ASIC upscalers, with the recent models using AI databases (don't they all now?). They could leverage that ASIC work to avoid wasting GPU cycles, but I don't know if it's really processing-intensive. The hard part is building the deep-learning data offline; during rendering it's just applying the inference model. There is no actual deep learning involved at runtime.
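A minimal sketch of that split, just to illustrate the point (the TinyUpscaler model, its shapes, and the weight file are all made up; real shipping upscalers are far bigger and trained offline on large image databases): at runtime nothing is learned, the model is only run forward.

```python
import torch
import torch.nn as nn

# Hypothetical, tiny upscaling net -- a stand-in for whatever an ASIC or
# tensor-core upscaler actually implements; illustrative only.
class TinyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(3, 3 * scale * scale, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)  # rearranges channels into a 2x larger image

    def forward(self, x):
        return self.shuffle(self.conv(x))

model = TinyUpscaler()
# model.load_state_dict(torch.load("weights.pt"))  # weights come from offline training
model.eval()

# Runtime: pure inference, no gradients, no learning.
with torch.no_grad():
    low_res = torch.rand(1, 3, 540, 960)   # e.g. a 960x540 frame
    high_res = model(low_res)              # upscaled to 1920x1080
print(high_res.shape)  # torch.Size([1, 3, 1080, 1920])
```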
 
I find myself having difficulty discerning the difference between sarcastic, hyperbolic posts meant in jest and those which are meant seriously.
 
Sony makes some of the best ASIC upscalers, with the recent models using AI databases (don't they all now?). They could leverage that ASIC work to avoid wasting GPU cycles, but I don't know if it's really processing-intensive. The hard part is building the deep-learning data offline; during rendering it's just applying the inference model. There is no actual deep learning involved at runtime.

Are they low-latency, though? That, to me, is the tricky part.
 
Sony makes some of the best ASIC upscalers, with the recent models using AI databases (don't they all now?). They could leverage that ASIC work to avoid wasting GPU cycles, but I don't know if it's really processing-intensive. The hard part is building the deep-learning data offline; during rendering it's just applying the inference model. There is no actual deep learning involved at runtime.

According to Nvidia, having it run on dedicated hardware allows more flexibility:

https://www.techspot.com/article/1992-nvidia-dlss-2020/

"This first batch of results playing Control with the shader version of DLSS are impressive. This begs the question, why did Nvidia feel the need to go back to an AI model running on tensor cores for the latest version of DLSS? Couldn’t they just keep working on the shader version and open it up to everyone, such as GTX 16 series owners? We asked Nvidia the question, and the answer was pretty straightforward: Nvidia’s engineers felt that they had reached the limits with the shader version.

Concretely, switching back to tensor cores and using an AI model allows Nvidia to achieve better image quality, better handling of some pain points like motion, better low resolution support and a more flexible approach. Apparently this implementation for Control required a lot of hand tuning and was found to not work well with other types of games, whereas DLSS 2.0 back on the tensor cores is more generalized and more easily applicable to a wide range of games without per-game training."
 
Are they low-latency, though? That, to me, is the tricky part.
Yeah, good question. I know their frame interpolator needs 45 ms, and there's other stuff in modern TVs that is also temporal; it's all useless for gaming. HDR post-processing also needs a few frames, depending on the brand.

However, historically the scaling was usually delayed by only 32 or 64 scanlines; maybe that has changed.

Still, an ASIC like this would need a really small footprint to warrant its inclusion. For example, if doing it on the GPU required a 10% or 15% time slice then it might be worth it, but not if it's just freeing up 2%.

It seems that we only see ASIC blocks where the gain is gigantic. Codecs are always the best candidates. Not sure about scaling: it used to be worth it when the algorithms were really simple, but this time it requires a lot more memory access than inline hardwired stuff.
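Rough numbers behind that trade-off (my own back-of-envelope assumptions, not anything measured): at 60 fps a frame is about 16.7 ms, so a 10-15% GPU time slice is 1.7-2.5 ms while 2% is only ~0.3 ms, and a 32-64 scanline delay at 1080p60 timing is well under a millisecond.

```python
# Back-of-envelope numbers for the "is an upscaling ASIC worth it" argument.
# All figures are illustrative assumptions, not measured data.

frame_time_ms = 1000.0 / 60.0          # ~16.67 ms per frame at 60 fps

for share in (0.02, 0.10, 0.15):       # GPU time slice the upscaling might cost
    print(f"{share:.0%} of a 60 fps frame = {share * frame_time_ms:.2f} ms")

# Latency of a scaler that lags by N scanlines (classic hardware-scaler behaviour).
lines_per_frame = 1125                  # total scanlines in 1080p60 timing, incl. blanking
line_time_ms = frame_time_ms / lines_per_frame
for delay_lines in (32, 64):
    print(f"{delay_lines} scanlines of delay = {delay_lines * line_time_ms:.3f} ms")
```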
 
Concretely, switching back to tensor cores and using an AI model allows Nvidia to achieve better image quality, better handling of some pain points like motion, better low resolution support and a more flexible approach. Apparently this implementation for Control required a lot of hand tuning and was found to not work well with other types of games, whereas DLSS 2.0 back on the tensor cores is more generalized and more easily applicable to a wide range of games without per-game training."
Would like a poll to see how many believe this.
 
How possible is it that MS could be incorporating some kind of special in-house machine-learning technology (or one developed with AMD) where a lower-res image is reconstructed to 8K, and that might consume less performance than actually rendering native 8K?
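A back-of-envelope look at why that could pay off (just pixel arithmetic with standard resolutions; the cost of the reconstruction pass itself is the unknown and would have to be added on top): native 8K has 4x the pixels of 4K and 16x those of 1080p, so even a fairly expensive reconstruction step has a lot of shading work to trade against.

```python
# Rough pixel-count comparison behind the "reconstruct to 8K instead of
# rendering it natively" idea. Pure arithmetic with assumed resolutions.

resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

pixels_8k = resolutions["8K"][0] * resolutions["8K"][1]

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:5.1f} MPix, "
          f"native 8K shades {pixels_8k / pixels:.1f}x as many pixels")
```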
 